00:00:00.000 Started by upstream project "autotest-per-patch" build number 132812
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.084 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:01.467 The recommended git tool is: git
00:00:01.467 using credential 00000000-0000-0000-0000-000000000002
00:00:01.469 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:01.480 Fetching changes from the remote Git repository
00:00:01.481 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:01.494 Using shallow fetch with depth 1
00:00:01.494 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:01.494 > git --version # timeout=10
00:00:01.505 > git --version # 'git version 2.39.2'
00:00:01.505 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:01.516 Setting http proxy: proxy-dmz.intel.com:911
00:00:01.516 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.722 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.734 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.746 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.746 > git config core.sparsecheckout # timeout=10
00:00:06.758 > git read-tree -mu HEAD # timeout=10
00:00:06.775 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.793 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.793 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.926 [Pipeline] Start of Pipeline
00:00:06.936 [Pipeline] library
00:00:06.937 Loading library shm_lib@master
00:00:06.937 Library shm_lib@master is cached. Copying from home.
00:00:06.950 [Pipeline] node
00:00:06.959 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.960 [Pipeline] {
00:00:06.969 [Pipeline] catchError
00:00:06.970 [Pipeline] {
00:00:06.979 [Pipeline] wrap
00:00:06.984 [Pipeline] {
00:00:06.990 [Pipeline] stage
00:00:06.991 [Pipeline] { (Prologue)
00:00:07.003 [Pipeline] echo
00:00:07.004 Node: VM-host-WFP1
00:00:07.008 [Pipeline] cleanWs
00:00:07.017 [WS-CLEANUP] Deleting project workspace...
00:00:07.017 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.023 [WS-CLEANUP] done
00:00:07.294 [Pipeline] setCustomBuildProperty
00:00:07.374 [Pipeline] httpRequest
00:00:07.759 [Pipeline] echo
00:00:07.760 Sorcerer 10.211.164.112 is alive
00:00:07.767 [Pipeline] retry
00:00:07.769 [Pipeline] {
00:00:07.780 [Pipeline] httpRequest
00:00:07.784 HttpMethod: GET
00:00:07.784 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.784 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.789 Response Code: HTTP/1.1 200 OK
00:00:07.789 Success: Status code 200 is in the accepted range: 200,404
00:00:07.790 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:31.008 [Pipeline] }
00:00:31.023 [Pipeline] // retry
00:00:31.029 [Pipeline] sh
00:00:31.313 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:31.330 [Pipeline] httpRequest
00:00:32.081 [Pipeline] echo
00:00:32.083 Sorcerer 10.211.164.112 is alive
00:00:32.089 [Pipeline] retry
00:00:32.091 [Pipeline] {
00:00:32.102 [Pipeline] httpRequest
00:00:32.106 HttpMethod: GET
00:00:32.106 URL: http://10.211.164.112/packages/spdk_f804716326904236f3c92ef63215a9f84395ddb4.tar.gz
00:00:32.107 Sending request to url: http://10.211.164.112/packages/spdk_f804716326904236f3c92ef63215a9f84395ddb4.tar.gz
00:00:32.113 Response Code: HTTP/1.1 200 OK
00:00:32.114 Success: Status code 200 is in the accepted range: 200,404
00:00:32.115 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_f804716326904236f3c92ef63215a9f84395ddb4.tar.gz
00:07:07.834 [Pipeline] }
00:07:07.849 [Pipeline] // retry
00:07:07.855 [Pipeline] sh
00:07:08.143 + tar --no-same-owner -xf spdk_f804716326904236f3c92ef63215a9f84395ddb4.tar.gz
00:07:10.759 [Pipeline] sh
00:07:11.038 + git -C spdk log --oneline -n5
00:07:11.038 f80471632 nvme: add spdk_nvme_poll_group_get_fd_group()
00:07:11.038 969b360d9 thread: fd_group-based interrupts
00:07:11.038 851f166ec thread: move interrupt allocation to a function
00:07:11.038 c12cb8fe3 util: add method for setting fd_group's wrapper
00:07:11.038 43c35d804 util: multi-level fd_group nesting
00:07:11.055 [Pipeline] writeFile
00:07:11.070 [Pipeline] sh
00:07:11.357 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:07:11.368 [Pipeline] sh
00:07:11.650 + cat autorun-spdk.conf
00:07:11.650 SPDK_RUN_FUNCTIONAL_TEST=1
00:07:11.650 SPDK_TEST_NVME=1
00:07:11.650 SPDK_TEST_FTL=1
00:07:11.650 SPDK_TEST_ISAL=1
00:07:11.650 SPDK_RUN_ASAN=1
00:07:11.650 SPDK_RUN_UBSAN=1
00:07:11.650 SPDK_TEST_XNVME=1
00:07:11.650 SPDK_TEST_NVME_FDP=1
00:07:11.650 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:07:11.658 RUN_NIGHTLY=0
00:07:11.659 [Pipeline] }
00:07:11.672 [Pipeline] // stage
00:07:11.686 [Pipeline] stage
00:07:11.688 [Pipeline] { (Run VM)
00:07:11.699 [Pipeline] sh
00:07:11.981 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:07:11.981 + echo 'Start stage prepare_nvme.sh'
00:07:11.981 Start stage prepare_nvme.sh
00:07:11.981 + [[ -n 0 ]]
00:07:11.981 + disk_prefix=ex0
00:07:11.981 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:07:11.981 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:07:11.981 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:07:11.981 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:07:11.981 ++ SPDK_TEST_NVME=1
00:07:11.981 ++ SPDK_TEST_FTL=1
00:07:11.981 ++ SPDK_TEST_ISAL=1
00:07:11.981 ++ SPDK_RUN_ASAN=1
00:07:11.981 ++ SPDK_RUN_UBSAN=1
00:07:11.981 ++ SPDK_TEST_XNVME=1
00:07:11.981 ++ SPDK_TEST_NVME_FDP=1
00:07:11.981 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:07:11.981 ++ RUN_NIGHTLY=0
00:07:11.981 + cd /var/jenkins/workspace/nvme-vg-autotest
00:07:11.981 + nvme_files=()
00:07:11.981 + declare -A nvme_files
00:07:11.981 + backend_dir=/var/lib/libvirt/images/backends
00:07:11.981 + nvme_files['nvme.img']=5G
00:07:11.981 + nvme_files['nvme-cmb.img']=5G
00:07:11.981 + nvme_files['nvme-multi0.img']=4G
00:07:11.981 + nvme_files['nvme-multi1.img']=4G
00:07:11.981 + nvme_files['nvme-multi2.img']=4G
00:07:11.981 + nvme_files['nvme-openstack.img']=8G
00:07:11.981 + nvme_files['nvme-zns.img']=5G
00:07:11.981 + (( SPDK_TEST_NVME_PMR == 1 ))
00:07:11.981 + (( SPDK_TEST_FTL == 1 ))
00:07:11.981 + nvme_files["nvme-ftl.img"]=6G
00:07:11.981 + (( SPDK_TEST_NVME_FDP == 1 ))
00:07:11.981 + nvme_files["nvme-fdp.img"]=1G
00:07:11.981 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:07:11.981 + for nvme in "${!nvme_files[@]}"
00:07:11.981 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:07:11.981 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:07:11.981 + for nvme in "${!nvme_files[@]}"
00:07:11.981 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G
00:07:11.981 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:07:11.981 + for nvme in "${!nvme_files[@]}"
00:07:11.981 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:07:11.981 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:07:11.981 + for nvme in "${!nvme_files[@]}"
00:07:11.981 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:07:12.239 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:07:12.239 + for nvme in "${!nvme_files[@]}"
00:07:12.239 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:07:12.239 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:07:12.239 + for nvme in "${!nvme_files[@]}"
00:07:12.239 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:07:12.239 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:07:12.239 + for nvme in "${!nvme_files[@]}"
00:07:12.239 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:07:12.239 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:07:12.239 + for nvme in "${!nvme_files[@]}"
00:07:12.239 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G
00:07:12.239 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:07:12.239 + for nvme in "${!nvme_files[@]}"
00:07:12.239 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:07:12.497 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:07:12.497 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:07:12.497 + echo 'End stage prepare_nvme.sh'
00:07:12.497 End stage prepare_nvme.sh
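[Note] The prepare_nvme.sh trace above follows one pattern throughout: a Bash associative array maps backing-file names to sizes, the FTL and FDP images are appended only when their test flags are set, and a single loop creates every file. A condensed, illustrative sketch of that pattern, assuming only the create_nvme_img.sh interface visible in the trace (-n <path>, -s <size>); it is not the actual script:

    #!/usr/bin/env bash
    # Two entries shown for brevity; the job above declares nine images.
    declare -A nvme_files=( [nvme.img]=5G [nvme-multi0.img]=4G )
    (( SPDK_TEST_FTL == 1 )) && nvme_files[nvme-ftl.img]=6G        # extra image only when FTL is tested
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G   # likewise for FDP
    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
        # The ex0- prefix keeps images from concurrent executors apart.
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/ex0-$nvme" -s "${nvme_files[$nvme]}"
    done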
00:07:12.507 [Pipeline] sh
00:07:12.852 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:07:12.852 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:07:12.852
00:07:12.852 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:07:12.852 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:07:12.852 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:07:12.852 HELP=0
00:07:12.852 DRY_RUN=0
00:07:12.852 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,
00:07:12.852 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:07:12.852 NVME_AUTO_CREATE=0
00:07:12.852 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,,
00:07:12.852 NVME_CMB=,,,,
00:07:12.852 NVME_PMR=,,,,
00:07:12.852 NVME_ZNS=,,,,
00:07:12.852 NVME_MS=true,,,,
00:07:12.852 NVME_FDP=,,,on,
00:07:12.852 SPDK_VAGRANT_DISTRO=fedora39
00:07:12.852 SPDK_VAGRANT_VMCPU=10
00:07:12.852 SPDK_VAGRANT_VMRAM=12288
00:07:12.852 SPDK_VAGRANT_PROVIDER=libvirt
00:07:12.852 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:07:12.852 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:07:12.852 SPDK_OPENSTACK_NETWORK=0
00:07:12.852 VAGRANT_PACKAGE_BOX=0
00:07:12.852 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:07:12.852 FORCE_DISTRO=true
00:07:12.852 VAGRANT_BOX_VERSION=
00:07:12.852 EXTRA_VAGRANTFILES=
00:07:12.852 NIC_MODEL=e1000
00:07:12.852
00:07:12.852 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:07:12.852 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:07:15.385 Bringing machine 'default' up with 'libvirt' provider...
00:07:16.762 ==> default: Creating image (snapshot of base box volume).
00:07:17.023 ==> default: Creating domain with the following settings...
00:07:17.023 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733784523_0a40cc948cfae3159376
00:07:17.023 ==> default: -- Domain type: kvm
00:07:17.023 ==> default: -- Cpus: 10
00:07:17.023 ==> default: -- Feature: acpi
00:07:17.023 ==> default: -- Feature: apic
00:07:17.023 ==> default: -- Feature: pae
00:07:17.023 ==> default: -- Memory: 12288M
00:07:17.023 ==> default: -- Memory Backing: hugepages:
00:07:17.023 ==> default: -- Management MAC:
00:07:17.023 ==> default: -- Loader:
00:07:17.023 ==> default: -- Nvram:
00:07:17.023 ==> default: -- Base box: spdk/fedora39
00:07:17.023 ==> default: -- Storage pool: default
00:07:17.023 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733784523_0a40cc948cfae3159376.img (20G)
00:07:17.023 ==> default: -- Volume Cache: default
00:07:17.023 ==> default: -- Kernel:
00:07:17.023 ==> default: -- Initrd:
00:07:17.023 ==> default: -- Graphics Type: vnc
00:07:17.023 ==> default: -- Graphics Port: -1
00:07:17.023 ==> default: -- Graphics IP: 127.0.0.1
00:07:17.023 ==> default: -- Graphics Password: Not defined
00:07:17.023 ==> default: -- Video Type: cirrus
00:07:17.023 ==> default: -- Video VRAM: 9216
00:07:17.023 ==> default: -- Sound Type:
00:07:17.023 ==> default: -- Keymap: en-us
00:07:17.023 ==> default: -- TPM Path:
00:07:17.023 ==> default: -- INPUT: type=mouse, bus=ps2
00:07:17.023 ==> default: -- Command line args:
00:07:17.023 ==> default: -> value=-device,
00:07:17.023 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:07:17.023 ==> default: -> value=-drive,
00:07:17.023 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:07:17.023 ==> default: -> value=-device,
00:07:17.023 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:07:17.023 ==> default: -> value=-device,
00:07:17.023 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:07:17.023 ==> default: -> value=-drive,
00:07:17.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0,
00:07:17.024 ==> default: -> value=-device,
00:07:17.024 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:07:17.024 ==> default: -> value=-device,
00:07:17.024 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:07:17.024 ==> default: -> value=-drive,
00:07:17.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:07:17.024 ==> default: -> value=-device,
00:07:17.024 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:07:17.024 ==> default: -> value=-drive,
00:07:17.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:07:17.024 ==> default: -> value=-device,
00:07:17.024 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:07:17.024 ==> default: -> value=-drive,
00:07:17.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:07:17.024 ==> default: -> value=-device,
00:07:17.024 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:07:17.024 ==> default: -> value=-device,
00:07:17.024 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:07:17.024 ==> default: -> value=-device,
00:07:17.024 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:07:17.024 ==> default: -> value=-drive,
00:07:17.024 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:07:17.024 ==> default: -> value=-device,
00:07:17.024 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
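[Note] Each disk attached above follows the same three-argument QEMU pattern: -device nvme creates a controller (with a serial number and PCI address), -drive registers the raw backing file without auto-attaching it (if=none), and -device nvme-ns binds that drive to the controller as a namespace. The FTL namespace additionally carries ms=64 (per-block metadata bytes), and the FDP disk hangs off an nvme-subsys device with fdp=on. A reduced invocation with a single controller and namespace for illustration; the image path and serial here are placeholders, not values from this job:

    qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/tmp/example-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096

Adding further -drive/-device nvme-ns pairs against the same bus, as the multi0/multi1/multi2 images do above, yields multiple namespaces on one controller.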
00:07:17.592 ==> default: Creating shared folders metadata...
00:07:17.592 ==> default: Starting domain.
00:07:20.138 ==> default: Waiting for domain to get an IP address...
00:07:42.163 ==> default: Waiting for SSH to become available...
00:07:42.163 ==> default: Configuring and enabling network interfaces...
00:07:47.432 default: SSH address: 192.168.121.123:22
00:07:47.432 default: SSH username: vagrant
00:07:47.432 default: SSH auth method: private key
00:07:49.960 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:07:59.936 ==> default: Mounting SSHFS shared folder...
00:08:01.309 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:08:01.309 ==> default: Checking Mount..
00:08:02.683 ==> default: Folder Successfully Mounted!
00:08:02.683 ==> default: Running provisioner: file...
00:08:04.053 default: ~/.gitconfig => .gitconfig
00:08:04.312
00:08:04.312 SUCCESS!
00:08:04.312
00:08:04.312 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:08:04.312 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:08:04.312 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:08:04.312
00:08:04.321 [Pipeline] }
00:08:04.335 [Pipeline] // stage
00:08:04.343 [Pipeline] dir
00:08:04.343 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:08:04.345 [Pipeline] {
00:08:04.355 [Pipeline] catchError
00:08:04.357 [Pipeline] {
00:08:04.368 [Pipeline] sh
00:08:04.668 + vagrant ssh-config --host vagrant
00:08:04.668 + sed -ne /^Host/,$p
00:08:04.668 + tee ssh_conf
00:08:07.948 Host vagrant
00:08:07.948 HostName 192.168.121.123
00:08:07.948 User vagrant
00:08:07.948 Port 22
00:08:07.948 UserKnownHostsFile /dev/null
00:08:07.948 StrictHostKeyChecking no
00:08:07.948 PasswordAuthentication no
00:08:07.948 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:08:07.948 IdentitiesOnly yes
00:08:07.948 LogLevel FATAL
00:08:07.948 ForwardAgent yes
00:08:07.948 ForwardX11 yes
00:08:07.948
00:08:07.960 [Pipeline] withEnv
00:08:07.962 [Pipeline] {
00:08:07.972 [Pipeline] sh
00:08:08.248 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:08:08.248 source /etc/os-release
00:08:08.248 [[ -e /image.version ]] && img=$(< /image.version)
00:08:08.248 # Minimal, systemd-like check.
00:08:08.248 if [[ -e /.dockerenv ]]; then
00:08:08.248 # Clear garbage from the node's name:
00:08:08.248 # agt-er_autotest_547-896 -> autotest_547-896
00:08:08.248 # $HOSTNAME is the actual container id
00:08:08.248 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:08:08.248 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:08:08.248 # We can assume this is a mount from a host where container is running,
00:08:08.248 # so fetch its hostname to easily identify the target swarm worker.
00:08:08.248 container="$(< /etc/hostname) ($agent)"
00:08:08.248 else
00:08:08.248 # Fallback
00:08:08.248 container=$agent
00:08:08.248 fi
00:08:08.248 fi
00:08:08.248 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:08:08.248
00:08:08.515 [Pipeline] }
00:08:08.530 [Pipeline] // withEnv
00:08:08.539 [Pipeline] setCustomBuildProperty
00:08:08.552 [Pipeline] stage
00:08:08.554 [Pipeline] { (Tests)
00:08:08.568 [Pipeline] sh
00:08:08.847 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:08:09.119 [Pipeline] sh
00:08:09.401 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:08:09.674 [Pipeline] timeout
00:08:09.675 Timeout set to expire in 50 min
00:08:09.676 [Pipeline] {
00:08:09.690 [Pipeline] sh
00:08:09.968 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:08:10.534 HEAD is now at f80471632 nvme: add spdk_nvme_poll_group_get_fd_group()
00:08:10.544 [Pipeline] sh
00:08:10.821 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:08:11.091 [Pipeline] sh
00:08:11.369 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:08:11.641 [Pipeline] sh
00:08:11.921 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:08:12.179 ++ readlink -f spdk_repo
00:08:12.179 + DIR_ROOT=/home/vagrant/spdk_repo
00:08:12.179 + [[ -n /home/vagrant/spdk_repo ]]
00:08:12.179 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:08:12.179 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:08:12.179 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:08:12.179 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:08:12.179 + [[ -d /home/vagrant/spdk_repo/output ]]
00:08:12.179 + [[ nvme-vg-autotest == pkgdep-* ]]
00:08:12.179 + cd /home/vagrant/spdk_repo
00:08:12.179 + source /etc/os-release
00:08:12.179 ++ NAME='Fedora Linux'
00:08:12.179 ++ VERSION='39 (Cloud Edition)'
00:08:12.179 ++ ID=fedora
00:08:12.179 ++ VERSION_ID=39
00:08:12.179 ++ VERSION_CODENAME=
00:08:12.179 ++ PLATFORM_ID=platform:f39
00:08:12.179 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:08:12.179 ++ ANSI_COLOR='0;38;2;60;110;180'
00:08:12.179 ++ LOGO=fedora-logo-icon
00:08:12.179 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:08:12.179 ++ HOME_URL=https://fedoraproject.org/
00:08:12.179 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:08:12.179 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:08:12.179 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:08:12.179 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:08:12.179 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:08:12.179 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:08:12.179 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:08:12.179 ++ SUPPORT_END=2024-11-12
00:08:12.179 ++ VARIANT='Cloud Edition'
00:08:12.179 ++ VARIANT_ID=cloud
00:08:12.179 + uname -a
00:08:12.179 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:08:12.179 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:08:12.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:08:13.104 Hugepages
00:08:13.104 node hugesize free / total
00:08:13.104 node0 1048576kB 0 / 0
00:08:13.104 node0 2048kB 0 / 0
00:08:13.104
00:08:13.104 Type BDF Vendor Device NUMA Driver Device Block devices
00:08:13.104 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:08:13.104 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:08:13.104 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:08:13.104 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:08:13.104 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
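[Note] The status table shows that the guest enumerated the controllers in a different order than their PCI slots: 0000:00:10.0 (the FTL disk, serial 12340) came up as nvme2, while 0000:00:11.0 became nvme0. If you need to re-derive that mapping inside the VM, sysfs exposes it directly; a small helper shown for illustration, not part of the job:

    # Print <ctrl> <pci-address> <serial> for every NVMe controller,
    # mirroring what setup.sh status reports.
    for c in /sys/class/nvme/nvme*; do
        printf '%s %s %s\n' "${c##*/}" \
            "$(basename "$(readlink -f "$c/device")")" \
            "$(cat "$c/serial")"
    done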
00:08:13.363 + rm -f /tmp/spdk-ld-path
00:08:13.363 + source autorun-spdk.conf
00:08:13.363 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:08:13.363 ++ SPDK_TEST_NVME=1
00:08:13.363 ++ SPDK_TEST_FTL=1
00:08:13.363 ++ SPDK_TEST_ISAL=1
00:08:13.363 ++ SPDK_RUN_ASAN=1
00:08:13.363 ++ SPDK_RUN_UBSAN=1
00:08:13.363 ++ SPDK_TEST_XNVME=1
00:08:13.363 ++ SPDK_TEST_NVME_FDP=1
00:08:13.363 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:08:13.363 ++ RUN_NIGHTLY=0
00:08:13.363 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:08:13.363 + [[ -n '' ]]
00:08:13.363 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:08:13.363 + for M in /var/spdk/build-*-manifest.txt
00:08:13.363 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:08:13.363 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:08:13.363 + for M in /var/spdk/build-*-manifest.txt
00:08:13.363 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:08:13.363 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:08:13.363 + for M in /var/spdk/build-*-manifest.txt
00:08:13.363 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:08:13.363 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:08:13.363 ++ uname
00:08:13.363 + [[ Linux == \L\i\n\u\x ]]
00:08:13.363 + sudo dmesg -T
00:08:13.363 + sudo dmesg --clear
00:08:13.363 + dmesg_pid=5251
00:08:13.363 + [[ Fedora Linux == FreeBSD ]]
00:08:13.363 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:08:13.363 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:08:13.363 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:08:13.363 + [[ -x /usr/src/fio-static/fio ]]
00:08:13.363 + export FIO_BIN=/usr/src/fio-static/fio
00:08:13.363 + FIO_BIN=/usr/src/fio-static/fio
00:08:13.363 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:08:13.363 + [[ ! -v VFIO_QEMU_BIN ]]
00:08:13.363 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:08:13.363 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:13.363 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:08:13.363 + sudo dmesg -Tw
00:08:13.363 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:08:13.363 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:08:13.363 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:08:13.363 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:08:13.363 22:49:40 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:08:13.363 22:49:40 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:08:13.363 22:49:40 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:08:13.363 22:49:40 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:08:13.363 22:49:40 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:08:13.363 22:49:40 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:08:13.363 22:49:40 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:08:13.363 22:49:40 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:08:13.363 22:49:40 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:08:13.363 22:49:40 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:08:13.363 22:49:40 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:08:13.363 22:49:40 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:08:13.363 22:49:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:08:13.363 22:49:40 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:08:13.621 22:49:40 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:08:13.621 22:49:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:13.621 22:49:40 -- scripts/common.sh@15 -- $ shopt -s extglob
00:08:13.621 22:49:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:08:13.621 22:49:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:13.621 22:49:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:13.621 22:49:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:13.621 22:49:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:13.621 22:49:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:13.622 22:49:40 -- paths/export.sh@5 -- $ export PATH
00:08:13.622 22:49:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:13.622 22:49:40 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:08:13.622 22:49:40 -- common/autobuild_common.sh@493 -- $ date +%s
00:08:13.622 22:49:40 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733784580.XXXXXX
00:08:13.622 22:49:40 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733784580.cNP8oR
00:08:13.622 22:49:40 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:08:13.622 22:49:40 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:08:13.622 22:49:40 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:08:13.622 22:49:40 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:08:13.622 22:49:40 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:08:13.622 22:49:40 -- common/autobuild_common.sh@509 -- $ get_config_params
00:08:13.622 22:49:40 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:08:13.622 22:49:40 -- common/autotest_common.sh@10 -- $ set +x
00:08:13.622 22:49:40 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:08:13.622 22:49:40 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:08:13.622 22:49:40 -- pm/common@17 -- $ local monitor
00:08:13.622 22:49:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:13.622 22:49:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:08:13.622 22:49:40 -- pm/common@25 -- $ sleep 1
00:08:13.622 22:49:40 -- pm/common@21 -- $ date +%s
00:08:13.622 22:49:40 -- pm/common@21 -- $ date +%s
00:08:13.622 22:49:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733784580
00:08:13.622 22:49:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733784580
00:08:13.622 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733784580_collect-vmstat.pm.log
00:08:13.622 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733784580_collect-cpu-load.pm.log
00:08:14.556 22:49:41 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:08:14.556 22:49:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:08:14.556 22:49:41 -- spdk/autobuild.sh@12 -- $ umask 022
00:08:14.556 22:49:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:08:14.556 22:49:41 -- spdk/autobuild.sh@16 -- $ date -u
00:08:14.556 Mon Dec 9 10:49:41 PM UTC 2024
00:08:14.556 22:49:41 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:08:14.556 v25.01-pre-319-gf80471632
00:08:14.556 22:49:41 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:08:14.556 22:49:41 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:08:14.556 22:49:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:08:14.556 22:49:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:08:14.556 22:49:41 -- common/autotest_common.sh@10 -- $ set +x
00:08:14.556 ************************************
00:08:14.556 START TEST asan
00:08:14.556 ************************************
00:08:14.556 using asan
00:08:14.556 22:49:41 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:08:14.556
00:08:14.556 real 0m0.000s
00:08:14.556 user 0m0.000s
00:08:14.556 sys 0m0.000s
00:08:14.556 22:49:41 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:08:14.556 22:49:41 asan -- common/autotest_common.sh@10 -- $ set +x
00:08:14.556 ************************************
00:08:14.556 END TEST asan
00:08:14.556 ************************************
00:08:14.814 22:49:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:08:14.814 22:49:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:08:14.814 22:49:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:08:14.814 22:49:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:08:14.814 22:49:41 -- common/autotest_common.sh@10 -- $ set +x
00:08:14.814 ************************************
00:08:14.814 START TEST ubsan
00:08:14.814 ************************************
00:08:14.814 using ubsan
00:08:14.814 22:49:41 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:08:14.814
00:08:14.814 real 0m0.000s
00:08:14.814 user 0m0.000s
00:08:14.814 sys 0m0.000s
00:08:14.814 22:49:41 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:08:14.814 22:49:41 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:08:14.814 ************************************
00:08:14.814 END TEST ubsan
00:08:14.814 ************************************
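[Note] The banner blocks and the real/user/sys timings around each test come from SPDK's run_test helper (sourced from common/autotest_common.sh, per the xtrace prefixes above): it prints a START banner, runs the named command under time, then prints an END banner. A simplified sketch of that behavior only; the real helper also manages xtrace state and exit-code bookkeeping:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"          # produces the real/user/sys lines seen above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test ubsan echo 'using ubsan'   # mirrors the ubsan block in this log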
00:08:14.815 22:49:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:08:14.815 22:49:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:08:14.815 22:49:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:08:14.815 22:49:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:08:14.815 22:49:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:08:14.815 22:49:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:08:14.815 22:49:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:08:14.815 22:49:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:08:14.815 22:49:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:08:14.815 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:08:14.815 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:08:15.381 Using 'verbs' RDMA provider
00:08:34.841 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:08:49.745 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:08:49.745 Creating mk/config.mk...done.
00:08:49.745 Creating mk/cc.flags.mk...done.
00:08:49.745 Type 'make' to build.
00:08:49.745 22:50:15 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:08:49.746 22:50:15 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:08:49.746 22:50:15 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:08:49.746 22:50:15 -- common/autotest_common.sh@10 -- $ set +x
00:08:49.746 ************************************
00:08:49.746 START TEST make
00:08:49.746 ************************************
00:08:49.746 22:50:15 make -- common/autotest_common.sh@1129 -- $ make -j10
00:08:49.746 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:08:49.746 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:08:49.746 meson setup builddir \
00:08:49.746 -Dwith-libaio=enabled \
00:08:49.746 -Dwith-liburing=enabled \
00:08:49.746 -Dwith-libvfn=disabled \
00:08:49.746 -Dwith-spdk=disabled \
00:08:49.746 -Dexamples=false \
00:08:49.746 -Dtests=false \
00:08:49.746 -Dtools=false && \
00:08:49.746 meson compile -C builddir && \
00:08:49.746 cd -)
00:08:49.746 make[1]: Nothing to be done for 'all'.
00:08:52.277 The Meson build system
00:08:52.278 Version: 1.5.0
00:08:52.278 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:08:52.278 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:08:52.278 Build type: native build
00:08:52.278 Project name: xnvme
00:08:52.278 Project version: 0.7.5
00:08:52.278 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:08:52.278 C linker for the host machine: cc ld.bfd 2.40-14
00:08:52.278 Host machine cpu family: x86_64
00:08:52.278 Host machine cpu: x86_64
00:08:52.278 Message: host_machine.system: linux
00:08:52.278 Compiler for C supports arguments -Wno-missing-braces: YES
00:08:52.278 Compiler for C supports arguments -Wno-cast-function-type: YES
00:08:52.278 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:08:52.278 Run-time dependency threads found: YES
00:08:52.278 Has header "setupapi.h" : NO
00:08:52.278 Has header "linux/blkzoned.h" : YES
00:08:52.278 Has header "linux/blkzoned.h" : YES (cached)
00:08:52.278 Has header "libaio.h" : YES
00:08:52.278 Library aio found: YES
00:08:52.278 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:08:52.278 Run-time dependency liburing found: YES 2.2
00:08:52.278 Dependency libvfn skipped: feature with-libvfn disabled
00:08:52.278 Found CMake: /usr/bin/cmake (3.27.7)
00:08:52.278 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:08:52.278 Subproject spdk : skipped: feature with-spdk disabled
00:08:52.278 Run-time dependency appleframeworks found: NO (tried framework)
00:08:52.278 Run-time dependency appleframeworks found: NO (tried framework)
00:08:52.278 Library rt found: YES
00:08:52.278 Checking for function "clock_gettime" with dependency -lrt: YES
00:08:52.278 Configuring xnvme_config.h using configuration
00:08:52.278 Configuring xnvme.spec using configuration
00:08:52.278 Run-time dependency bash-completion found: YES 2.11
00:08:52.278 Message: Bash-completions: /usr/share/bash-completion/completions
00:08:52.278 Program cp found: YES (/usr/bin/cp)
00:08:52.278 Build targets in project: 3
00:08:52.278
00:08:52.278 xnvme 0.7.5
00:08:52.278
00:08:52.278 Subprojects
00:08:52.278 spdk : NO Feature 'with-spdk' disabled
00:08:52.278
00:08:52.278 User defined options
00:08:52.278 examples : false
00:08:52.278 tests : false
00:08:52.278 tools : false
00:08:52.278 with-libaio : enabled
00:08:52.278 with-liburing: enabled
00:08:52.278 with-libvfn : disabled
00:08:52.278 with-spdk : disabled
00:08:52.278
00:08:52.278 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:08:52.536 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:08:52.536 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:08:52.797 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:08:52.797 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:08:52.797 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:08:52.797 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:08:52.797 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:08:52.797 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:08:52.797 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:08:52.797 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:08:52.797 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:08:52.797 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:08:52.797 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:08:53.055 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:08:53.055 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:08:53.055 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:08:53.055 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:08:53.055 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:08:53.055 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:08:53.055 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:08:53.055 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:08:53.055 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:08:53.055 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:08:53.055 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:08:53.055 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:08:53.055 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:08:53.055 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:08:53.055 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:08:53.055 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:08:53.055 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:08:53.055 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:08:53.055 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:08:53.055 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:08:53.312 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:08:53.312 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:08:53.312 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:08:53.312 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:08:53.312 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:08:53.312 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:08:53.312 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:08:53.312 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:08:53.312 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:08:53.312 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:08:53.312 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:08:53.312 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:08:53.312 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:08:53.312 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:08:53.312 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:08:53.312 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:08:53.313 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:08:53.313 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:08:53.313 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:08:53.313 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:08:53.313 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:08:53.313 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:08:53.571 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:08:53.571 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:08:53.571 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:08:53.571 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:08:53.571 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:08:53.571 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:08:53.571 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:08:53.571 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:08:53.571 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:08:53.571 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:08:53.571 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:08:53.571 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:08:53.829 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:08:53.829 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:08:53.829 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:08:53.829 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:08:53.829 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:08:53.829 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:08:53.829 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:08:54.087 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:08:54.345 [75/76] Linking target lib/libxnvme.so.0.7.5
00:08:54.345 [76/76] Linking static target lib/libxnvme.a
00:08:54.345 INFO: autodetecting backend as ninja
00:08:54.345 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:08:54.345 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:09:04.320 The Meson build system
00:09:04.320 Version: 1.5.0
00:09:04.320 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:09:04.320 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:09:04.320 Build type: native build
00:09:04.320 Program cat found: YES (/usr/bin/cat)
00:09:04.320 Project name: DPDK
00:09:04.320 Project version: 24.03.0
00:09:04.320 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:09:04.320 C linker for the host machine: cc ld.bfd 2.40-14
00:09:04.320 Host machine cpu family: x86_64
00:09:04.320 Host machine cpu: x86_64
00:09:04.320 Message: ## Building in Developer Mode ##
00:09:04.320 Program pkg-config found: YES (/usr/bin/pkg-config)
00:09:04.320 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:09:04.320 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:09:04.320 Program python3 found: YES (/usr/bin/python3)
00:09:04.320 Program cat found: YES (/usr/bin/cat)
00:09:04.320 Compiler for C supports arguments -march=native: YES
00:09:04.320 Checking for size of "void *" : 8
"void *" : 8 (cached) 00:09:04.320 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:09:04.320 Library m found: YES 00:09:04.320 Library numa found: YES 00:09:04.320 Has header "numaif.h" : YES 00:09:04.320 Library fdt found: NO 00:09:04.320 Library execinfo found: NO 00:09:04.320 Has header "execinfo.h" : YES 00:09:04.320 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:09:04.320 Run-time dependency libarchive found: NO (tried pkgconfig) 00:09:04.320 Run-time dependency libbsd found: NO (tried pkgconfig) 00:09:04.320 Run-time dependency jansson found: NO (tried pkgconfig) 00:09:04.320 Run-time dependency openssl found: YES 3.1.1 00:09:04.320 Run-time dependency libpcap found: YES 1.10.4 00:09:04.320 Has header "pcap.h" with dependency libpcap: YES 00:09:04.320 Compiler for C supports arguments -Wcast-qual: YES 00:09:04.320 Compiler for C supports arguments -Wdeprecated: YES 00:09:04.320 Compiler for C supports arguments -Wformat: YES 00:09:04.320 Compiler for C supports arguments -Wformat-nonliteral: NO 00:09:04.320 Compiler for C supports arguments -Wformat-security: NO 00:09:04.320 Compiler for C supports arguments -Wmissing-declarations: YES 00:09:04.320 Compiler for C supports arguments -Wmissing-prototypes: YES 00:09:04.320 Compiler for C supports arguments -Wnested-externs: YES 00:09:04.320 Compiler for C supports arguments -Wold-style-definition: YES 00:09:04.320 Compiler for C supports arguments -Wpointer-arith: YES 00:09:04.320 Compiler for C supports arguments -Wsign-compare: YES 00:09:04.320 Compiler for C supports arguments -Wstrict-prototypes: YES 00:09:04.320 Compiler for C supports arguments -Wundef: YES 00:09:04.320 Compiler for C supports arguments -Wwrite-strings: YES 00:09:04.320 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:09:04.320 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:09:04.320 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:09:04.320 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:09:04.320 Program objdump found: YES (/usr/bin/objdump) 00:09:04.320 Compiler for C supports arguments -mavx512f: YES 00:09:04.320 Checking if "AVX512 checking" compiles: YES 00:09:04.320 Fetching value of define "__SSE4_2__" : 1 00:09:04.320 Fetching value of define "__AES__" : 1 00:09:04.320 Fetching value of define "__AVX__" : 1 00:09:04.320 Fetching value of define "__AVX2__" : 1 00:09:04.320 Fetching value of define "__AVX512BW__" : 1 00:09:04.320 Fetching value of define "__AVX512CD__" : 1 00:09:04.320 Fetching value of define "__AVX512DQ__" : 1 00:09:04.320 Fetching value of define "__AVX512F__" : 1 00:09:04.320 Fetching value of define "__AVX512VL__" : 1 00:09:04.320 Fetching value of define "__PCLMUL__" : 1 00:09:04.320 Fetching value of define "__RDRND__" : 1 00:09:04.320 Fetching value of define "__RDSEED__" : 1 00:09:04.321 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:09:04.321 Fetching value of define "__znver1__" : (undefined) 00:09:04.321 Fetching value of define "__znver2__" : (undefined) 00:09:04.321 Fetching value of define "__znver3__" : (undefined) 00:09:04.321 Fetching value of define "__znver4__" : (undefined) 00:09:04.321 Library asan found: YES 00:09:04.321 Compiler for C supports arguments -Wno-format-truncation: YES 00:09:04.321 Message: lib/log: Defining dependency "log" 00:09:04.321 Message: lib/kvargs: Defining dependency "kvargs" 00:09:04.321 Message: lib/telemetry: Defining dependency "telemetry" 00:09:04.321 Library rt 
found: YES 00:09:04.321 Checking for function "getentropy" : NO 00:09:04.321 Message: lib/eal: Defining dependency "eal" 00:09:04.321 Message: lib/ring: Defining dependency "ring" 00:09:04.321 Message: lib/rcu: Defining dependency "rcu" 00:09:04.321 Message: lib/mempool: Defining dependency "mempool" 00:09:04.321 Message: lib/mbuf: Defining dependency "mbuf" 00:09:04.321 Fetching value of define "__PCLMUL__" : 1 (cached) 00:09:04.321 Fetching value of define "__AVX512F__" : 1 (cached) 00:09:04.321 Fetching value of define "__AVX512BW__" : 1 (cached) 00:09:04.321 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:09:04.321 Fetching value of define "__AVX512VL__" : 1 (cached) 00:09:04.321 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:09:04.321 Compiler for C supports arguments -mpclmul: YES 00:09:04.321 Compiler for C supports arguments -maes: YES 00:09:04.321 Compiler for C supports arguments -mavx512f: YES (cached) 00:09:04.321 Compiler for C supports arguments -mavx512bw: YES 00:09:04.321 Compiler for C supports arguments -mavx512dq: YES 00:09:04.321 Compiler for C supports arguments -mavx512vl: YES 00:09:04.321 Compiler for C supports arguments -mvpclmulqdq: YES 00:09:04.321 Compiler for C supports arguments -mavx2: YES 00:09:04.321 Compiler for C supports arguments -mavx: YES 00:09:04.321 Message: lib/net: Defining dependency "net" 00:09:04.321 Message: lib/meter: Defining dependency "meter" 00:09:04.321 Message: lib/ethdev: Defining dependency "ethdev" 00:09:04.321 Message: lib/pci: Defining dependency "pci" 00:09:04.321 Message: lib/cmdline: Defining dependency "cmdline" 00:09:04.321 Message: lib/hash: Defining dependency "hash" 00:09:04.321 Message: lib/timer: Defining dependency "timer" 00:09:04.321 Message: lib/compressdev: Defining dependency "compressdev" 00:09:04.321 Message: lib/cryptodev: Defining dependency "cryptodev" 00:09:04.321 Message: lib/dmadev: Defining dependency "dmadev" 00:09:04.321 Compiler for C supports arguments -Wno-cast-qual: YES 00:09:04.321 Message: lib/power: Defining dependency "power" 00:09:04.321 Message: lib/reorder: Defining dependency "reorder" 00:09:04.321 Message: lib/security: Defining dependency "security" 00:09:04.321 Has header "linux/userfaultfd.h" : YES 00:09:04.321 Has header "linux/vduse.h" : YES 00:09:04.321 Message: lib/vhost: Defining dependency "vhost" 00:09:04.321 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:09:04.321 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:09:04.321 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:09:04.321 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:09:04.321 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:09:04.321 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:09:04.321 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:09:04.321 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:09:04.321 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:09:04.321 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:09:04.321 Program doxygen found: YES (/usr/local/bin/doxygen) 00:09:04.321 Configuring doxy-api-html.conf using configuration 00:09:04.321 Configuring doxy-api-man.conf using configuration 00:09:04.321 Program mandb found: YES (/usr/bin/mandb) 00:09:04.321 Program sphinx-build found: NO 00:09:04.321 Configuring rte_build_config.h using configuration 
00:09:04.321 Message:
00:09:04.321 =================
00:09:04.321 Applications Enabled
00:09:04.321 =================
00:09:04.321
00:09:04.321 apps:
00:09:04.321
00:09:04.321
00:09:04.321 Message:
00:09:04.321 =================
00:09:04.321 Libraries Enabled
00:09:04.321 =================
00:09:04.321
00:09:04.321 libs:
00:09:04.321 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:09:04.321 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:09:04.321 cryptodev, dmadev, power, reorder, security, vhost,
00:09:04.321
00:09:04.321 Message:
00:09:04.321 ===============
00:09:04.321 Drivers Enabled
00:09:04.321 ===============
00:09:04.321
00:09:04.321 common:
00:09:04.321
00:09:04.321 bus:
00:09:04.321 pci, vdev,
00:09:04.321 mempool:
00:09:04.321 ring,
00:09:04.321 dma:
00:09:04.321
00:09:04.321 net:
00:09:04.321
00:09:04.321 crypto:
00:09:04.321
00:09:04.321 compress:
00:09:04.321
00:09:04.321 vdpa:
00:09:04.321
00:09:04.321
00:09:04.321 Message:
00:09:04.321 =================
00:09:04.321 Content Skipped
00:09:04.321 =================
00:09:04.321
00:09:04.321 apps:
00:09:04.321 dumpcap: explicitly disabled via build config
00:09:04.321 graph: explicitly disabled via build config
00:09:04.321 pdump: explicitly disabled via build config
00:09:04.321 proc-info: explicitly disabled via build config
00:09:04.321 test-acl: explicitly disabled via build config
00:09:04.321 test-bbdev: explicitly disabled via build config
00:09:04.321 test-cmdline: explicitly disabled via build config
00:09:04.321 test-compress-perf: explicitly disabled via build config
00:09:04.321 test-crypto-perf: explicitly disabled via build config
00:09:04.321 test-dma-perf: explicitly disabled via build config
00:09:04.321 test-eventdev: explicitly disabled via build config
00:09:04.321 test-fib: explicitly disabled via build config
00:09:04.321 test-flow-perf: explicitly disabled via build config
00:09:04.321 test-gpudev: explicitly disabled via build config
00:09:04.321 test-mldev: explicitly disabled via build config
00:09:04.321 test-pipeline: explicitly disabled via build config
00:09:04.321 test-pmd: explicitly disabled via build config
00:09:04.321 test-regex: explicitly disabled via build config
00:09:04.321 test-sad: explicitly disabled via build config
00:09:04.321 test-security-perf: explicitly disabled via build config
00:09:04.321
00:09:04.321 libs:
00:09:04.321 argparse: explicitly disabled via build config
00:09:04.321 metrics: explicitly disabled via build config
00:09:04.321 acl: explicitly disabled via build config
00:09:04.321 bbdev: explicitly disabled via build config
00:09:04.321 bitratestats: explicitly disabled via build config
00:09:04.321 bpf: explicitly disabled via build config
00:09:04.321 cfgfile: explicitly disabled via build config
00:09:04.321 distributor: explicitly disabled via build config
00:09:04.321 efd: explicitly disabled via build config
00:09:04.321 eventdev: explicitly disabled via build config
00:09:04.321 dispatcher: explicitly disabled via build config
00:09:04.321 gpudev: explicitly disabled via build config
00:09:04.321 gro: explicitly disabled via build config
00:09:04.321 gso: explicitly disabled via build config
00:09:04.321 ip_frag: explicitly disabled via build config
00:09:04.321 jobstats: explicitly disabled via build config
00:09:04.321 latencystats: explicitly disabled via build config
00:09:04.321 lpm: explicitly disabled via build config
00:09:04.321 member: explicitly disabled via build config
00:09:04.321 pcapng: explicitly disabled via build config
00:09:04.321 rawdev: explicitly disabled via build config
00:09:04.321 regexdev: explicitly disabled via build config
00:09:04.321 mldev: explicitly disabled via build config
00:09:04.321 rib: explicitly disabled via build config
00:09:04.321 sched: explicitly disabled via build config
00:09:04.321 stack: explicitly disabled via build config
00:09:04.321 ipsec: explicitly disabled via build config
00:09:04.321 pdcp: explicitly disabled via build config
00:09:04.321 fib: explicitly disabled via build config
00:09:04.321 port: explicitly disabled via build config
00:09:04.321 pdump: explicitly disabled via build config
00:09:04.321 table: explicitly disabled via build config
00:09:04.321 pipeline: explicitly disabled via build config
00:09:04.321 graph: explicitly disabled via build config
00:09:04.321 node: explicitly disabled via build config
00:09:04.321
00:09:04.321 drivers:
00:09:04.321 common/cpt: not in enabled drivers build config
00:09:04.321 common/dpaax: not in enabled drivers build config
00:09:04.321 common/iavf: not in enabled drivers build config
00:09:04.321 common/idpf: not in enabled drivers build config
00:09:04.321 common/ionic: not in enabled drivers build config
00:09:04.321 common/mvep: not in enabled drivers build config
00:09:04.321 common/octeontx: not in enabled drivers build config
00:09:04.321 bus/auxiliary: not in enabled drivers build config
00:09:04.321 bus/cdx: not in enabled drivers build config
00:09:04.321 bus/dpaa: not in enabled drivers build config
00:09:04.321 bus/fslmc: not in enabled drivers build config
00:09:04.321 bus/ifpga: not in enabled drivers build config
00:09:04.321 bus/platform: not in enabled drivers build config
00:09:04.322 bus/uacce: not in enabled drivers build config
00:09:04.322 bus/vmbus: not in enabled drivers build config
00:09:04.322 common/cnxk: not in enabled drivers build config
00:09:04.322 common/mlx5: not in enabled drivers build config
00:09:04.322 common/nfp: not in enabled drivers build config
00:09:04.322 common/nitrox: not in enabled drivers build config
00:09:04.322 common/qat: not in enabled drivers build config
00:09:04.322 common/sfc_efx: not in enabled drivers build config
00:09:04.322 mempool/bucket: not in enabled drivers build config
00:09:04.322 mempool/cnxk: not in enabled drivers build config
00:09:04.322 mempool/dpaa: not in enabled drivers build config
00:09:04.322 mempool/dpaa2: not in enabled drivers build config
00:09:04.322 mempool/octeontx: not in enabled drivers build config
00:09:04.322 mempool/stack: not in enabled drivers build config
00:09:04.322 dma/cnxk: not in enabled drivers build config
00:09:04.322 dma/dpaa: not in enabled drivers build config
00:09:04.322 dma/dpaa2: not in enabled drivers build config
00:09:04.322 dma/hisilicon: not in enabled drivers build config
00:09:04.322 dma/idxd: not in enabled drivers build config
00:09:04.322 dma/ioat: not in enabled drivers build config
00:09:04.322 dma/skeleton: not in enabled drivers build config
00:09:04.322 net/af_packet: not in enabled drivers build config
00:09:04.322 net/af_xdp: not in enabled drivers build config
00:09:04.322 net/ark: not in enabled drivers build config
00:09:04.322 net/atlantic: not in enabled drivers build config
00:09:04.322 net/avp: not in enabled drivers build config
00:09:04.322 net/axgbe: not in enabled drivers build config
00:09:04.322 net/bnx2x: not in enabled drivers build config
00:09:04.322 net/bnxt: not in enabled drivers build config
00:09:04.322 net/bonding: not in enabled drivers build config
00:09:04.322 net/cnxk: not in enabled drivers build config 00:09:04.322 net/cpfl: not in enabled drivers build config 00:09:04.322 net/cxgbe: not in enabled drivers build config 00:09:04.322 net/dpaa: not in enabled drivers build config 00:09:04.322 net/dpaa2: not in enabled drivers build config 00:09:04.322 net/e1000: not in enabled drivers build config 00:09:04.322 net/ena: not in enabled drivers build config 00:09:04.322 net/enetc: not in enabled drivers build config 00:09:04.322 net/enetfec: not in enabled drivers build config 00:09:04.322 net/enic: not in enabled drivers build config 00:09:04.322 net/failsafe: not in enabled drivers build config 00:09:04.322 net/fm10k: not in enabled drivers build config 00:09:04.322 net/gve: not in enabled drivers build config 00:09:04.322 net/hinic: not in enabled drivers build config 00:09:04.322 net/hns3: not in enabled drivers build config 00:09:04.322 net/i40e: not in enabled drivers build config 00:09:04.322 net/iavf: not in enabled drivers build config 00:09:04.322 net/ice: not in enabled drivers build config 00:09:04.322 net/idpf: not in enabled drivers build config 00:09:04.322 net/igc: not in enabled drivers build config 00:09:04.322 net/ionic: not in enabled drivers build config 00:09:04.322 net/ipn3ke: not in enabled drivers build config 00:09:04.322 net/ixgbe: not in enabled drivers build config 00:09:04.322 net/mana: not in enabled drivers build config 00:09:04.322 net/memif: not in enabled drivers build config 00:09:04.322 net/mlx4: not in enabled drivers build config 00:09:04.322 net/mlx5: not in enabled drivers build config 00:09:04.322 net/mvneta: not in enabled drivers build config 00:09:04.322 net/mvpp2: not in enabled drivers build config 00:09:04.322 net/netvsc: not in enabled drivers build config 00:09:04.322 net/nfb: not in enabled drivers build config 00:09:04.322 net/nfp: not in enabled drivers build config 00:09:04.322 net/ngbe: not in enabled drivers build config 00:09:04.322 net/null: not in enabled drivers build config 00:09:04.322 net/octeontx: not in enabled drivers build config 00:09:04.322 net/octeon_ep: not in enabled drivers build config 00:09:04.322 net/pcap: not in enabled drivers build config 00:09:04.322 net/pfe: not in enabled drivers build config 00:09:04.322 net/qede: not in enabled drivers build config 00:09:04.322 net/ring: not in enabled drivers build config 00:09:04.322 net/sfc: not in enabled drivers build config 00:09:04.322 net/softnic: not in enabled drivers build config 00:09:04.322 net/tap: not in enabled drivers build config 00:09:04.322 net/thunderx: not in enabled drivers build config 00:09:04.322 net/txgbe: not in enabled drivers build config 00:09:04.322 net/vdev_netvsc: not in enabled drivers build config 00:09:04.322 net/vhost: not in enabled drivers build config 00:09:04.322 net/virtio: not in enabled drivers build config 00:09:04.322 net/vmxnet3: not in enabled drivers build config 00:09:04.322 raw/*: missing internal dependency, "rawdev" 00:09:04.322 crypto/armv8: not in enabled drivers build config 00:09:04.322 crypto/bcmfs: not in enabled drivers build config 00:09:04.322 crypto/caam_jr: not in enabled drivers build config 00:09:04.322 crypto/ccp: not in enabled drivers build config 00:09:04.322 crypto/cnxk: not in enabled drivers build config 00:09:04.322 crypto/dpaa_sec: not in enabled drivers build config 00:09:04.322 crypto/dpaa2_sec: not in enabled drivers build config 00:09:04.322 crypto/ipsec_mb: not in enabled drivers build config 00:09:04.322 crypto/mlx5: not in enabled drivers 
build config 00:09:04.322 crypto/mvsam: not in enabled drivers build config 00:09:04.322 crypto/nitrox: not in enabled drivers build config 00:09:04.322 crypto/null: not in enabled drivers build config 00:09:04.322 crypto/octeontx: not in enabled drivers build config 00:09:04.322 crypto/openssl: not in enabled drivers build config 00:09:04.322 crypto/scheduler: not in enabled drivers build config 00:09:04.322 crypto/uadk: not in enabled drivers build config 00:09:04.322 crypto/virtio: not in enabled drivers build config 00:09:04.322 compress/isal: not in enabled drivers build config 00:09:04.322 compress/mlx5: not in enabled drivers build config 00:09:04.322 compress/nitrox: not in enabled drivers build config 00:09:04.322 compress/octeontx: not in enabled drivers build config 00:09:04.322 compress/zlib: not in enabled drivers build config 00:09:04.322 regex/*: missing internal dependency, "regexdev" 00:09:04.322 ml/*: missing internal dependency, "mldev" 00:09:04.322 vdpa/ifc: not in enabled drivers build config 00:09:04.322 vdpa/mlx5: not in enabled drivers build config 00:09:04.322 vdpa/nfp: not in enabled drivers build config 00:09:04.322 vdpa/sfc: not in enabled drivers build config 00:09:04.322 event/*: missing internal dependency, "eventdev" 00:09:04.322 baseband/*: missing internal dependency, "bbdev" 00:09:04.322 gpu/*: missing internal dependency, "gpudev" 00:09:04.322 00:09:04.322 00:09:04.890 Build targets in project: 85 00:09:04.890 00:09:04.890 DPDK 24.03.0 00:09:04.890 00:09:04.890 User defined options 00:09:04.890 buildtype : debug 00:09:04.890 default_library : shared 00:09:04.890 libdir : lib 00:09:04.890 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:04.890 b_sanitize : address 00:09:04.890 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:09:04.890 c_link_args : 00:09:04.890 cpu_instruction_set: native 00:09:04.890 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:09:04.890 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:09:04.890 enable_docs : false 00:09:04.891 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:09:04.891 enable_kmods : false 00:09:04.891 max_lcores : 128 00:09:04.891 tests : false 00:09:04.891 00:09:04.891 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:05.827 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:09:06.086 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:09:06.086 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:09:06.086 [3/268] Linking static target lib/librte_kvargs.a 00:09:06.086 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:09:06.345 [5/268] Linking static target lib/librte_log.a 00:09:06.345 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:09:06.604 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:09:06.604 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:09:06.604 
[9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:09:06.863 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:09:06.863 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:09:06.863 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:09:06.863 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:09:07.122 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:09:07.122 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:09:07.381 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:09:07.381 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:09:07.381 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:09:07.381 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:09:07.641 [20/268] Linking static target lib/librte_telemetry.a 00:09:07.641 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:09:07.641 [22/268] Linking target lib/librte_log.so.24.1 00:09:07.641 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:09:07.641 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:09:07.902 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:09:07.902 [26/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:09:07.902 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:09:08.161 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:09:08.161 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:09:08.161 [30/268] Linking target lib/librte_kvargs.so.24.1 00:09:08.161 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:09:08.161 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:09:08.524 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:09:08.524 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:09:08.524 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:09:08.524 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:09:08.524 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:09:08.797 [38/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:09:08.797 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:09:08.797 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:09:08.797 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:09:08.797 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:09:08.797 [43/268] Linking target lib/librte_telemetry.so.24.1 00:09:08.797 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:09:08.797 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:09:09.055 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:09:09.055 [47/268] Generating symbol file 
lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:09:09.313 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:09:09.314 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:09:09.314 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:09:09.314 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:09:09.314 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:09:09.314 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:09:09.572 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:09:09.572 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:09:09.572 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:09:09.830 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:09:09.830 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:09:09.830 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:09:09.830 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:09:09.830 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:09:09.830 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:09:10.089 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:09:10.089 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:09:10.089 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:09:10.089 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:09:10.347 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:09:10.347 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:09:10.605 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:09:10.605 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:09:10.605 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:09:10.605 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:09:10.605 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:09:10.605 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:09:10.605 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:09:10.864 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:09:10.864 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:09:10.864 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:09:11.122 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:09:11.122 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:09:11.122 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:09:11.122 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:09:11.122 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:09:11.380 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:09:11.380 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:09:11.380 [86/268] Linking static target lib/librte_eal.a 00:09:11.380 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:09:11.380 [88/268] Linking 
static target lib/librte_ring.a 00:09:11.380 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:09:11.639 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:09:11.639 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:09:11.639 [92/268] Linking static target lib/librte_mempool.a 00:09:11.639 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:09:11.898 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:09:11.898 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:09:11.898 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:09:11.898 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:09:11.898 [98/268] Linking static target lib/librte_rcu.a 00:09:12.156 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:09:12.156 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:09:12.414 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:09:12.414 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:09:12.672 [103/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:09:12.672 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:09:12.672 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:09:12.672 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:09:12.672 [107/268] Linking static target lib/librte_net.a 00:09:12.672 [108/268] Linking static target lib/librte_meter.a 00:09:12.672 [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:09:12.672 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:09:12.672 [111/268] Linking static target lib/librte_mbuf.a 00:09:12.931 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:09:13.189 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:09:13.189 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:09:13.189 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:09:13.189 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:09:13.448 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:09:13.448 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:09:13.707 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:09:13.707 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:09:13.965 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:09:14.223 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:09:14.481 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:09:14.481 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:09:14.481 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:09:14.481 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:09:14.481 [127/268] Linking static target lib/librte_pci.a 00:09:14.740 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:09:14.740 [129/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:09:14.740 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:09:14.999 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:09:14.999 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:09:14.999 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:14.999 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:09:15.257 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:09:15.257 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:09:15.257 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:09:15.516 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:09:15.516 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:09:15.516 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:09:15.516 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:09:15.516 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:09:15.516 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:09:15.516 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:09:16.082 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:09:16.082 [146/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:09:16.082 [147/268] Linking static target lib/librte_timer.a 00:09:16.082 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:09:16.340 [149/268] Linking static target lib/librte_cmdline.a 00:09:16.340 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:09:16.619 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:09:16.876 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:09:17.134 [153/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:09:17.134 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:09:17.134 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:09:17.134 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:09:17.391 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:09:17.649 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:09:17.649 [159/268] Linking static target lib/librte_compressdev.a 00:09:17.649 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:09:17.649 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:09:17.649 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:09:17.907 [163/268] Linking static target lib/librte_ethdev.a 00:09:18.165 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:09:18.165 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:09:18.423 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:09:18.423 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:09:18.423 [168/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:09:18.423 [169/268] Linking static target lib/librte_hash.a 00:09:18.423 [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:09:18.423 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:09:18.680 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:09:18.680 [173/268] Linking static target lib/librte_dmadev.a 00:09:18.937 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:09:19.195 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:09:19.195 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:09:19.195 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:19.195 [178/268] Linking static target lib/librte_cryptodev.a 00:09:19.195 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:09:19.453 [180/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:09:19.453 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:09:19.453 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:09:19.712 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:09:19.712 [184/268] Linking static target lib/librte_power.a 00:09:19.712 [185/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:19.971 [186/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:09:19.971 [187/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:09:19.971 [188/268] Linking static target lib/librte_reorder.a 00:09:20.229 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:09:20.229 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:09:20.229 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:09:20.229 [192/268] Linking static target lib/librte_security.a 00:09:20.487 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:09:20.744 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:09:21.310 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:09:21.310 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:09:21.310 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:09:21.310 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:09:21.568 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:09:21.568 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:09:21.827 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:09:22.086 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:09:22.086 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:09:22.086 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:09:22.356 [205/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:22.356 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:09:22.356 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 
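The [N/268] progress above is ninja working through the DPDK sub-build that was configured in the "User defined options" summary earlier in the log. As a rough sketch, that configuration corresponds to a meson/ninja invocation along the following lines; the flags are standard meson and DPDK build options, the long disable/enable lists are abbreviated here (their full values appear in the summary above), and this is illustrative rather than the exact command the CI wrapper ran:

    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
        --buildtype=debug --default-library=shared \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps='dumpcap,graph,pdump,...' \
        -Ddisable_libs='acl,argparse,bbdev,...' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,...' \
        -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
    # matches the backend command meson reports further down in the log:
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10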
00:09:22.356 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:09:22.356 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:09:22.630 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:09:22.630 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:09:22.889 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:09:22.889 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:22.889 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:22.889 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:09:22.889 [216/268] Linking static target drivers/librte_bus_pci.a 00:09:22.889 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:22.889 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:22.889 [219/268] Linking static target drivers/librte_bus_vdev.a 00:09:22.889 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:09:22.889 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:09:23.147 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:09:23.148 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:23.148 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:23.148 [225/268] Linking static target drivers/librte_mempool_ring.a 00:09:23.406 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:23.665 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:23.922 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:09:27.203 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:09:27.203 [230/268] Linking target lib/librte_eal.so.24.1 00:09:27.203 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:09:27.203 [232/268] Linking target lib/librte_ring.so.24.1 00:09:27.203 [233/268] Linking target lib/librte_pci.so.24.1 00:09:27.203 [234/268] Linking target lib/librte_meter.so.24.1 00:09:27.203 [235/268] Linking target lib/librte_dmadev.so.24.1 00:09:27.203 [236/268] Linking target lib/librte_timer.so.24.1 00:09:27.203 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:09:27.203 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:09:27.203 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:09:27.203 [240/268] Linking target lib/librte_rcu.so.24.1 00:09:27.203 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:09:27.203 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:09:27.203 [243/268] Linking target lib/librte_mempool.so.24.1 00:09:27.203 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:09:27.203 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:09:27.203 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:09:27.203 [247/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:09:27.461 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:09:27.461 [249/268] Linking target lib/librte_mbuf.so.24.1 00:09:27.461 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:09:27.719 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:09:27.719 [252/268] Linking target lib/librte_compressdev.so.24.1 00:09:27.719 [253/268] Linking target lib/librte_net.so.24.1 00:09:27.719 [254/268] Linking target lib/librte_reorder.so.24.1 00:09:27.719 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:09:27.719 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:09:27.719 [257/268] Linking target lib/librte_hash.so.24.1 00:09:27.719 [258/268] Linking target lib/librte_security.so.24.1 00:09:27.719 [259/268] Linking target lib/librte_cmdline.so.24.1 00:09:27.978 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:09:28.542 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:28.542 [262/268] Linking target lib/librte_ethdev.so.24.1 00:09:28.800 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:09:28.800 [264/268] Linking target lib/librte_power.so.24.1 00:09:29.056 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:09:29.056 [266/268] Linking static target lib/librte_vhost.a 00:09:30.962 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:09:30.962 [268/268] Linking target lib/librte_vhost.so.24.1 00:09:30.962 INFO: autodetecting backend as ninja 00:09:30.962 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:09:52.890 CC lib/ut/ut.o 00:09:52.890 CC lib/ut_mock/mock.o 00:09:52.890 CC lib/log/log.o 00:09:52.890 CC lib/log/log_flags.o 00:09:52.890 CC lib/log/log_deprecated.o 00:09:52.890 LIB libspdk_ut.a 00:09:52.890 LIB libspdk_ut_mock.a 00:09:52.890 SO libspdk_ut.so.2.0 00:09:52.890 SO libspdk_ut_mock.so.6.0 00:09:52.890 LIB libspdk_log.a 00:09:52.890 SYMLINK libspdk_ut.so 00:09:52.890 SYMLINK libspdk_ut_mock.so 00:09:52.890 SO libspdk_log.so.7.1 00:09:52.890 SYMLINK libspdk_log.so 00:09:52.890 CC lib/util/base64.o 00:09:52.890 CC lib/util/bit_array.o 00:09:52.890 CC lib/util/crc16.o 00:09:52.890 CC lib/ioat/ioat.o 00:09:52.890 CC lib/util/cpuset.o 00:09:52.890 CC lib/util/crc32.o 00:09:52.890 CC lib/util/crc32c.o 00:09:52.890 CC lib/dma/dma.o 00:09:52.890 CXX lib/trace_parser/trace.o 00:09:52.890 CC lib/vfio_user/host/vfio_user_pci.o 00:09:52.890 CC lib/util/crc32_ieee.o 00:09:52.890 CC lib/util/crc64.o 00:09:52.890 CC lib/util/dif.o 00:09:52.890 CC lib/util/fd.o 00:09:52.890 LIB libspdk_dma.a 00:09:52.890 CC lib/vfio_user/host/vfio_user.o 00:09:52.890 SO libspdk_dma.so.5.0 00:09:53.149 CC lib/util/fd_group.o 00:09:53.149 CC lib/util/file.o 00:09:53.149 SYMLINK libspdk_dma.so 00:09:53.149 CC lib/util/hexlify.o 00:09:53.149 CC lib/util/iov.o 00:09:53.149 CC lib/util/math.o 00:09:53.149 LIB libspdk_ioat.a 00:09:53.149 CC lib/util/net.o 00:09:53.149 SO libspdk_ioat.so.7.0 00:09:53.149 CC lib/util/pipe.o 00:09:53.149 CC lib/util/strerror_tls.o 00:09:53.149 LIB libspdk_vfio_user.a 00:09:53.407 CC lib/util/string.o 00:09:53.407 SO libspdk_vfio_user.so.5.0 00:09:53.407 CC lib/util/uuid.o 00:09:53.407 SYMLINK libspdk_ioat.so 
00:09:53.407 CC lib/util/xor.o 00:09:53.407 SYMLINK libspdk_vfio_user.so 00:09:53.407 CC lib/util/zipf.o 00:09:53.407 CC lib/util/md5.o 00:09:53.975 LIB libspdk_util.a 00:09:53.975 LIB libspdk_trace_parser.a 00:09:53.975 SO libspdk_util.so.10.1 00:09:53.975 SO libspdk_trace_parser.so.6.0 00:09:53.975 SYMLINK libspdk_trace_parser.so 00:09:54.234 SYMLINK libspdk_util.so 00:09:54.494 CC lib/conf/conf.o 00:09:54.494 CC lib/vmd/vmd.o 00:09:54.494 CC lib/vmd/led.o 00:09:54.494 CC lib/env_dpdk/env.o 00:09:54.494 CC lib/env_dpdk/pci.o 00:09:54.494 CC lib/env_dpdk/memory.o 00:09:54.494 CC lib/rdma_utils/rdma_utils.o 00:09:54.494 CC lib/env_dpdk/init.o 00:09:54.494 CC lib/json/json_parse.o 00:09:54.494 CC lib/idxd/idxd.o 00:09:54.494 CC lib/env_dpdk/threads.o 00:09:54.756 CC lib/env_dpdk/pci_ioat.o 00:09:54.756 LIB libspdk_conf.a 00:09:54.756 SO libspdk_conf.so.6.0 00:09:54.756 CC lib/json/json_util.o 00:09:55.015 LIB libspdk_rdma_utils.a 00:09:55.015 SYMLINK libspdk_conf.so 00:09:55.015 CC lib/json/json_write.o 00:09:55.015 SO libspdk_rdma_utils.so.1.0 00:09:55.015 CC lib/env_dpdk/pci_virtio.o 00:09:55.015 SYMLINK libspdk_rdma_utils.so 00:09:55.015 CC lib/idxd/idxd_user.o 00:09:55.015 CC lib/idxd/idxd_kernel.o 00:09:55.275 CC lib/env_dpdk/pci_vmd.o 00:09:55.275 CC lib/env_dpdk/pci_idxd.o 00:09:55.275 CC lib/env_dpdk/pci_event.o 00:09:55.275 CC lib/env_dpdk/sigbus_handler.o 00:09:55.275 LIB libspdk_json.a 00:09:55.275 CC lib/env_dpdk/pci_dpdk.o 00:09:55.275 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:55.275 SO libspdk_json.so.6.0 00:09:55.533 LIB libspdk_idxd.a 00:09:55.533 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:55.533 SYMLINK libspdk_json.so 00:09:55.533 SO libspdk_idxd.so.12.1 00:09:55.533 LIB libspdk_vmd.a 00:09:55.533 SO libspdk_vmd.so.6.0 00:09:55.533 SYMLINK libspdk_idxd.so 00:09:55.533 CC lib/rdma_provider/common.o 00:09:55.533 CC lib/rdma_provider/rdma_provider_verbs.o 00:09:55.533 SYMLINK libspdk_vmd.so 00:09:55.792 CC lib/jsonrpc/jsonrpc_server.o 00:09:55.792 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:55.792 CC lib/jsonrpc/jsonrpc_client.o 00:09:55.792 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:56.050 LIB libspdk_rdma_provider.a 00:09:56.050 SO libspdk_rdma_provider.so.7.0 00:09:56.050 SYMLINK libspdk_rdma_provider.so 00:09:56.316 LIB libspdk_jsonrpc.a 00:09:56.316 SO libspdk_jsonrpc.so.6.0 00:09:56.316 SYMLINK libspdk_jsonrpc.so 00:09:56.575 LIB libspdk_env_dpdk.a 00:09:56.834 CC lib/rpc/rpc.o 00:09:56.834 SO libspdk_env_dpdk.so.15.1 00:09:57.091 SYMLINK libspdk_env_dpdk.so 00:09:57.091 LIB libspdk_rpc.a 00:09:57.091 SO libspdk_rpc.so.6.0 00:09:57.349 SYMLINK libspdk_rpc.so 00:09:57.608 CC lib/keyring/keyring_rpc.o 00:09:57.608 CC lib/keyring/keyring.o 00:09:57.608 CC lib/notify/notify.o 00:09:57.608 CC lib/notify/notify_rpc.o 00:09:57.608 CC lib/trace/trace.o 00:09:57.608 CC lib/trace/trace_rpc.o 00:09:57.608 CC lib/trace/trace_flags.o 00:09:57.875 LIB libspdk_notify.a 00:09:57.875 SO libspdk_notify.so.6.0 00:09:57.875 LIB libspdk_keyring.a 00:09:57.875 SO libspdk_keyring.so.2.0 00:09:57.875 LIB libspdk_trace.a 00:09:57.875 SYMLINK libspdk_notify.so 00:09:57.875 SO libspdk_trace.so.11.0 00:09:58.135 SYMLINK libspdk_keyring.so 00:09:58.135 SYMLINK libspdk_trace.so 00:09:58.392 CC lib/thread/thread.o 00:09:58.392 CC lib/thread/iobuf.o 00:09:58.392 CC lib/sock/sock.o 00:09:58.392 CC lib/sock/sock_rpc.o 00:09:58.969 LIB libspdk_sock.a 00:09:58.969 SO libspdk_sock.so.10.0 00:09:58.969 SYMLINK libspdk_sock.so 00:09:59.537 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:59.537 CC lib/nvme/nvme_ctrlr.o 00:09:59.537 
CC lib/nvme/nvme_ns_cmd.o 00:09:59.537 CC lib/nvme/nvme_ns.o 00:09:59.537 CC lib/nvme/nvme_pcie_common.o 00:09:59.537 CC lib/nvme/nvme_fabric.o 00:09:59.537 CC lib/nvme/nvme_pcie.o 00:09:59.537 CC lib/nvme/nvme_qpair.o 00:09:59.537 CC lib/nvme/nvme.o 00:10:00.105 CC lib/nvme/nvme_quirks.o 00:10:00.364 CC lib/nvme/nvme_transport.o 00:10:00.624 CC lib/nvme/nvme_discovery.o 00:10:00.624 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:10:00.624 LIB libspdk_thread.a 00:10:00.624 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:10:00.624 CC lib/nvme/nvme_tcp.o 00:10:00.624 SO libspdk_thread.so.11.0 00:10:00.883 CC lib/nvme/nvme_opal.o 00:10:00.883 SYMLINK libspdk_thread.so 00:10:00.883 CC lib/nvme/nvme_io_msg.o 00:10:01.142 CC lib/accel/accel.o 00:10:01.142 CC lib/accel/accel_rpc.o 00:10:01.400 CC lib/accel/accel_sw.o 00:10:01.400 CC lib/blob/blobstore.o 00:10:01.659 CC lib/nvme/nvme_poll_group.o 00:10:01.659 CC lib/nvme/nvme_zns.o 00:10:01.659 CC lib/init/json_config.o 00:10:01.659 CC lib/fsdev/fsdev.o 00:10:01.659 CC lib/virtio/virtio.o 00:10:01.918 CC lib/init/subsystem.o 00:10:01.918 CC lib/init/subsystem_rpc.o 00:10:02.253 CC lib/nvme/nvme_stubs.o 00:10:02.253 CC lib/init/rpc.o 00:10:02.524 CC lib/blob/request.o 00:10:02.524 CC lib/virtio/virtio_vhost_user.o 00:10:02.524 CC lib/virtio/virtio_vfio_user.o 00:10:02.524 LIB libspdk_init.a 00:10:02.785 SO libspdk_init.so.6.0 00:10:02.785 CC lib/nvme/nvme_auth.o 00:10:02.785 CC lib/nvme/nvme_cuse.o 00:10:02.785 CC lib/fsdev/fsdev_io.o 00:10:02.785 CC lib/virtio/virtio_pci.o 00:10:02.785 LIB libspdk_accel.a 00:10:02.785 SYMLINK libspdk_init.so 00:10:02.785 CC lib/fsdev/fsdev_rpc.o 00:10:02.785 CC lib/blob/zeroes.o 00:10:03.044 SO libspdk_accel.so.16.0 00:10:03.044 SYMLINK libspdk_accel.so 00:10:03.044 CC lib/blob/blob_bs_dev.o 00:10:03.302 LIB libspdk_virtio.a 00:10:03.302 SO libspdk_virtio.so.7.0 00:10:03.302 CC lib/event/app.o 00:10:03.302 LIB libspdk_fsdev.a 00:10:03.302 SO libspdk_fsdev.so.2.0 00:10:03.302 SYMLINK libspdk_virtio.so 00:10:03.302 CC lib/event/reactor.o 00:10:03.302 CC lib/nvme/nvme_rdma.o 00:10:03.560 CC lib/event/log_rpc.o 00:10:03.560 CC lib/bdev/bdev.o 00:10:03.560 SYMLINK libspdk_fsdev.so 00:10:03.560 CC lib/bdev/bdev_rpc.o 00:10:03.560 CC lib/event/app_rpc.o 00:10:03.818 CC lib/bdev/bdev_zone.o 00:10:04.076 CC lib/event/scheduler_static.o 00:10:04.076 CC lib/bdev/part.o 00:10:04.076 CC lib/bdev/scsi_nvme.o 00:10:04.334 LIB libspdk_event.a 00:10:04.334 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:10:04.593 SO libspdk_event.so.14.0 00:10:04.593 SYMLINK libspdk_event.so 00:10:05.528 LIB libspdk_nvme.a 00:10:05.528 LIB libspdk_fuse_dispatcher.a 00:10:05.528 SO libspdk_fuse_dispatcher.so.1.0 00:10:05.528 SYMLINK libspdk_fuse_dispatcher.so 00:10:05.528 SO libspdk_nvme.so.15.0 00:10:06.094 SYMLINK libspdk_nvme.so 00:10:06.354 LIB libspdk_blob.a 00:10:06.354 SO libspdk_blob.so.12.0 00:10:06.613 SYMLINK libspdk_blob.so 00:10:06.872 CC lib/lvol/lvol.o 00:10:06.872 CC lib/blobfs/blobfs.o 00:10:06.872 CC lib/blobfs/tree.o 00:10:07.436 LIB libspdk_bdev.a 00:10:07.695 SO libspdk_bdev.so.17.0 00:10:07.695 SYMLINK libspdk_bdev.so 00:10:07.956 LIB libspdk_blobfs.a 00:10:07.956 CC lib/nbd/nbd_rpc.o 00:10:07.956 CC lib/nbd/nbd.o 00:10:07.956 CC lib/nvmf/ctrlr.o 00:10:07.956 SO libspdk_blobfs.so.11.0 00:10:07.956 CC lib/nvmf/ctrlr_discovery.o 00:10:07.956 CC lib/nvmf/ctrlr_bdev.o 00:10:08.217 CC lib/scsi/dev.o 00:10:08.217 CC lib/ftl/ftl_core.o 00:10:08.217 CC lib/ublk/ublk.o 00:10:08.217 SYMLINK libspdk_blobfs.so 00:10:08.218 CC lib/ftl/ftl_init.o 00:10:08.218 CC 
lib/ftl/ftl_layout.o 00:10:08.476 CC lib/scsi/lun.o 00:10:08.476 LIB libspdk_lvol.a 00:10:08.476 SO libspdk_lvol.so.11.0 00:10:08.476 CC lib/ublk/ublk_rpc.o 00:10:08.477 SYMLINK libspdk_lvol.so 00:10:08.477 CC lib/scsi/port.o 00:10:08.477 LIB libspdk_nbd.a 00:10:08.735 CC lib/ftl/ftl_debug.o 00:10:08.735 SO libspdk_nbd.so.7.0 00:10:08.735 CC lib/ftl/ftl_io.o 00:10:08.735 CC lib/nvmf/subsystem.o 00:10:08.735 CC lib/nvmf/nvmf.o 00:10:08.735 SYMLINK libspdk_nbd.so 00:10:08.735 CC lib/ftl/ftl_sb.o 00:10:08.735 CC lib/scsi/scsi.o 00:10:08.735 CC lib/scsi/scsi_bdev.o 00:10:08.994 CC lib/ftl/ftl_l2p.o 00:10:08.994 LIB libspdk_ublk.a 00:10:08.994 CC lib/ftl/ftl_l2p_flat.o 00:10:08.994 SO libspdk_ublk.so.3.0 00:10:08.994 CC lib/nvmf/nvmf_rpc.o 00:10:08.994 CC lib/ftl/ftl_nv_cache.o 00:10:08.994 SYMLINK libspdk_ublk.so 00:10:08.994 CC lib/ftl/ftl_band.o 00:10:09.253 CC lib/ftl/ftl_band_ops.o 00:10:09.253 CC lib/ftl/ftl_writer.o 00:10:09.512 CC lib/ftl/ftl_rq.o 00:10:09.512 CC lib/scsi/scsi_pr.o 00:10:09.512 CC lib/ftl/ftl_reloc.o 00:10:09.512 CC lib/ftl/ftl_l2p_cache.o 00:10:09.512 CC lib/ftl/ftl_p2l.o 00:10:09.512 CC lib/ftl/ftl_p2l_log.o 00:10:09.845 CC lib/nvmf/transport.o 00:10:09.845 CC lib/scsi/scsi_rpc.o 00:10:10.119 CC lib/scsi/task.o 00:10:10.119 CC lib/ftl/mngt/ftl_mngt.o 00:10:10.119 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:10:10.119 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:10:10.119 CC lib/ftl/mngt/ftl_mngt_startup.o 00:10:10.119 CC lib/ftl/mngt/ftl_mngt_md.o 00:10:10.119 LIB libspdk_scsi.a 00:10:10.119 CC lib/ftl/mngt/ftl_mngt_misc.o 00:10:10.378 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:10:10.378 SO libspdk_scsi.so.9.0 00:10:10.378 CC lib/nvmf/tcp.o 00:10:10.378 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:10:10.378 CC lib/nvmf/stubs.o 00:10:10.378 CC lib/nvmf/mdns_server.o 00:10:10.378 SYMLINK libspdk_scsi.so 00:10:10.378 CC lib/nvmf/rdma.o 00:10:10.637 CC lib/nvmf/auth.o 00:10:10.637 CC lib/ftl/mngt/ftl_mngt_band.o 00:10:10.637 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:10:10.637 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:10:10.637 CC lib/iscsi/conn.o 00:10:10.637 CC lib/vhost/vhost.o 00:10:10.895 CC lib/iscsi/init_grp.o 00:10:10.895 CC lib/iscsi/iscsi.o 00:10:10.895 CC lib/iscsi/param.o 00:10:10.895 CC lib/iscsi/portal_grp.o 00:10:11.154 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:10:11.154 CC lib/iscsi/tgt_node.o 00:10:11.154 CC lib/iscsi/iscsi_subsystem.o 00:10:11.414 CC lib/iscsi/iscsi_rpc.o 00:10:11.414 CC lib/iscsi/task.o 00:10:11.414 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:10:11.414 CC lib/vhost/vhost_rpc.o 00:10:11.673 CC lib/vhost/vhost_scsi.o 00:10:11.673 CC lib/vhost/vhost_blk.o 00:10:11.673 CC lib/vhost/rte_vhost_user.o 00:10:11.673 CC lib/ftl/utils/ftl_conf.o 00:10:11.673 CC lib/ftl/utils/ftl_md.o 00:10:11.931 CC lib/ftl/utils/ftl_mempool.o 00:10:11.931 CC lib/ftl/utils/ftl_bitmap.o 00:10:12.192 CC lib/ftl/utils/ftl_property.o 00:10:12.192 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:10:12.192 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:10:12.192 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:10:12.450 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:10:12.450 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:10:12.450 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:10:12.451 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:10:12.451 CC lib/ftl/upgrade/ftl_sb_v3.o 00:10:12.451 CC lib/ftl/upgrade/ftl_sb_v5.o 00:10:12.709 CC lib/ftl/nvc/ftl_nvc_dev.o 00:10:12.709 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:10:12.709 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:10:12.709 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:10:12.709 LIB libspdk_iscsi.a 00:10:12.709 CC 
lib/ftl/base/ftl_base_dev.o 00:10:12.709 CC lib/ftl/base/ftl_base_bdev.o 00:10:12.709 CC lib/ftl/ftl_trace.o 00:10:12.968 SO libspdk_iscsi.so.8.0 00:10:12.968 LIB libspdk_vhost.a 00:10:12.968 SO libspdk_vhost.so.8.0 00:10:12.968 SYMLINK libspdk_iscsi.so 00:10:12.968 LIB libspdk_nvmf.a 00:10:12.968 LIB libspdk_ftl.a 00:10:12.968 SYMLINK libspdk_vhost.so 00:10:13.227 SO libspdk_nvmf.so.20.0 00:10:13.498 SO libspdk_ftl.so.9.0 00:10:13.498 SYMLINK libspdk_nvmf.so 00:10:13.758 SYMLINK libspdk_ftl.so 00:10:14.454 CC module/env_dpdk/env_dpdk_rpc.o 00:10:14.454 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:10:14.454 CC module/accel/dsa/accel_dsa.o 00:10:14.454 CC module/accel/error/accel_error.o 00:10:14.454 CC module/fsdev/aio/fsdev_aio.o 00:10:14.454 CC module/accel/ioat/accel_ioat.o 00:10:14.454 CC module/sock/posix/posix.o 00:10:14.454 CC module/blob/bdev/blob_bdev.o 00:10:14.454 CC module/scheduler/dynamic/scheduler_dynamic.o 00:10:14.454 CC module/keyring/file/keyring.o 00:10:14.454 LIB libspdk_env_dpdk_rpc.a 00:10:14.454 LIB libspdk_scheduler_dpdk_governor.a 00:10:14.454 CC module/accel/ioat/accel_ioat_rpc.o 00:10:14.454 SO libspdk_env_dpdk_rpc.so.6.0 00:10:14.454 SO libspdk_scheduler_dpdk_governor.so.4.0 00:10:14.454 CC module/keyring/file/keyring_rpc.o 00:10:14.713 CC module/accel/error/accel_error_rpc.o 00:10:14.713 SYMLINK libspdk_env_dpdk_rpc.so 00:10:14.713 SYMLINK libspdk_scheduler_dpdk_governor.so 00:10:14.713 CC module/fsdev/aio/fsdev_aio_rpc.o 00:10:14.713 CC module/fsdev/aio/linux_aio_mgr.o 00:10:14.713 CC module/accel/dsa/accel_dsa_rpc.o 00:10:14.713 LIB libspdk_accel_ioat.a 00:10:14.713 LIB libspdk_scheduler_dynamic.a 00:10:14.713 SO libspdk_accel_ioat.so.6.0 00:10:14.713 SO libspdk_scheduler_dynamic.so.4.0 00:10:14.713 LIB libspdk_keyring_file.a 00:10:14.713 LIB libspdk_blob_bdev.a 00:10:14.713 LIB libspdk_accel_error.a 00:10:14.713 SO libspdk_keyring_file.so.2.0 00:10:14.972 SO libspdk_blob_bdev.so.12.0 00:10:14.972 SYMLINK libspdk_scheduler_dynamic.so 00:10:14.972 SO libspdk_accel_error.so.2.0 00:10:14.972 SYMLINK libspdk_accel_ioat.so 00:10:14.972 SYMLINK libspdk_keyring_file.so 00:10:14.972 SYMLINK libspdk_blob_bdev.so 00:10:14.972 SYMLINK libspdk_accel_error.so 00:10:14.972 LIB libspdk_accel_dsa.a 00:10:14.972 SO libspdk_accel_dsa.so.5.0 00:10:14.972 CC module/accel/iaa/accel_iaa.o 00:10:14.972 CC module/scheduler/gscheduler/gscheduler.o 00:10:14.972 SYMLINK libspdk_accel_dsa.so 00:10:14.972 CC module/accel/iaa/accel_iaa_rpc.o 00:10:15.231 CC module/keyring/linux/keyring.o 00:10:15.231 CC module/bdev/delay/vbdev_delay.o 00:10:15.231 CC module/blobfs/bdev/blobfs_bdev.o 00:10:15.231 CC module/bdev/delay/vbdev_delay_rpc.o 00:10:15.231 CC module/bdev/error/vbdev_error.o 00:10:15.231 CC module/bdev/gpt/gpt.o 00:10:15.231 CC module/keyring/linux/keyring_rpc.o 00:10:15.231 LIB libspdk_accel_iaa.a 00:10:15.492 SO libspdk_accel_iaa.so.3.0 00:10:15.492 LIB libspdk_scheduler_gscheduler.a 00:10:15.492 LIB libspdk_keyring_linux.a 00:10:15.492 CC module/bdev/gpt/vbdev_gpt.o 00:10:15.492 SO libspdk_scheduler_gscheduler.so.4.0 00:10:15.492 SO libspdk_keyring_linux.so.1.0 00:10:15.492 LIB libspdk_sock_posix.a 00:10:15.492 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:10:15.492 SYMLINK libspdk_accel_iaa.so 00:10:15.492 SO libspdk_sock_posix.so.6.0 00:10:15.492 SYMLINK libspdk_keyring_linux.so 00:10:15.492 SYMLINK libspdk_scheduler_gscheduler.so 00:10:15.492 CC module/bdev/error/vbdev_error_rpc.o 00:10:15.751 SYMLINK libspdk_sock_posix.so 00:10:15.751 LIB libspdk_fsdev_aio.a 
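By this point the log has moved from DPDK's ninja output to SPDK's quiet make output, where each prefix names a build step: CC compiles one object, LIB archives a static library, SO links the versioned shared object, and SYMLINK drops the unversioned alias next to it. A minimal sketch of what one such sequence expands to, using the libspdk_ut library from earlier in the log (the compiler flags and paths are assumed for illustration, not taken from the log):

    cc -fPIC -Iinclude -c lib/ut/ut.c -o ut.o      # CC  lib/ut/ut.o
    ar crs libspdk_ut.a ut.o                       # LIB libspdk_ut.a
    cc -shared -o libspdk_ut.so.2.0 ut.o           # SO  libspdk_ut.so.2.0
    ln -sf libspdk_ut.so.2.0 libspdk_ut.so         # SYMLINK libspdk_ut.so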
00:10:15.751 LIB libspdk_bdev_delay.a 00:10:15.751 CC module/bdev/lvol/vbdev_lvol.o 00:10:15.751 CC module/bdev/malloc/bdev_malloc.o 00:10:15.751 SO libspdk_fsdev_aio.so.1.0 00:10:15.751 SO libspdk_bdev_delay.so.6.0 00:10:15.751 LIB libspdk_bdev_gpt.a 00:10:15.751 CC module/bdev/null/bdev_null.o 00:10:15.751 LIB libspdk_blobfs_bdev.a 00:10:15.751 CC module/bdev/nvme/bdev_nvme.o 00:10:15.751 SO libspdk_bdev_gpt.so.6.0 00:10:15.751 SYMLINK libspdk_fsdev_aio.so 00:10:15.751 SO libspdk_blobfs_bdev.so.6.0 00:10:16.008 CC module/bdev/passthru/vbdev_passthru.o 00:10:16.008 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:10:16.008 LIB libspdk_bdev_error.a 00:10:16.008 SYMLINK libspdk_bdev_gpt.so 00:10:16.008 SYMLINK libspdk_bdev_delay.so 00:10:16.008 CC module/bdev/null/bdev_null_rpc.o 00:10:16.008 SO libspdk_bdev_error.so.6.0 00:10:16.008 CC module/bdev/malloc/bdev_malloc_rpc.o 00:10:16.008 SYMLINK libspdk_blobfs_bdev.so 00:10:16.008 CC module/bdev/nvme/bdev_nvme_rpc.o 00:10:16.008 SYMLINK libspdk_bdev_error.so 00:10:16.008 CC module/bdev/nvme/nvme_rpc.o 00:10:16.267 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:10:16.267 LIB libspdk_bdev_null.a 00:10:16.267 CC module/bdev/nvme/bdev_mdns_client.o 00:10:16.267 CC module/bdev/nvme/vbdev_opal.o 00:10:16.267 SO libspdk_bdev_null.so.6.0 00:10:16.267 LIB libspdk_bdev_malloc.a 00:10:16.267 LIB libspdk_bdev_passthru.a 00:10:16.267 SO libspdk_bdev_malloc.so.6.0 00:10:16.267 SO libspdk_bdev_passthru.so.6.0 00:10:16.526 SYMLINK libspdk_bdev_null.so 00:10:16.526 SYMLINK libspdk_bdev_malloc.so 00:10:16.526 CC module/bdev/nvme/vbdev_opal_rpc.o 00:10:16.526 SYMLINK libspdk_bdev_passthru.so 00:10:16.526 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:10:16.786 LIB libspdk_bdev_lvol.a 00:10:16.786 CC module/bdev/raid/bdev_raid.o 00:10:16.786 CC module/bdev/split/vbdev_split.o 00:10:16.786 CC module/bdev/split/vbdev_split_rpc.o 00:10:16.786 SO libspdk_bdev_lvol.so.6.0 00:10:16.786 CC module/bdev/zone_block/vbdev_zone_block.o 00:10:16.786 SYMLINK libspdk_bdev_lvol.so 00:10:16.786 CC module/bdev/xnvme/bdev_xnvme.o 00:10:16.786 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:10:16.786 CC module/bdev/raid/bdev_raid_rpc.o 00:10:17.045 CC module/bdev/raid/bdev_raid_sb.o 00:10:17.045 CC module/bdev/raid/raid0.o 00:10:17.045 CC module/bdev/aio/bdev_aio.o 00:10:17.045 LIB libspdk_bdev_split.a 00:10:17.045 SO libspdk_bdev_split.so.6.0 00:10:17.045 CC module/bdev/raid/raid1.o 00:10:17.045 SYMLINK libspdk_bdev_split.so 00:10:17.045 CC module/bdev/raid/concat.o 00:10:17.304 CC module/bdev/aio/bdev_aio_rpc.o 00:10:17.304 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:10:17.563 LIB libspdk_bdev_xnvme.a 00:10:17.563 LIB libspdk_bdev_aio.a 00:10:17.563 SO libspdk_bdev_aio.so.6.0 00:10:17.563 SO libspdk_bdev_xnvme.so.3.0 00:10:17.563 LIB libspdk_bdev_zone_block.a 00:10:17.563 SYMLINK libspdk_bdev_aio.so 00:10:17.563 SYMLINK libspdk_bdev_xnvme.so 00:10:17.563 SO libspdk_bdev_zone_block.so.6.0 00:10:17.563 CC module/bdev/ftl/bdev_ftl.o 00:10:17.563 CC module/bdev/ftl/bdev_ftl_rpc.o 00:10:17.563 CC module/bdev/iscsi/bdev_iscsi.o 00:10:17.563 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:10:17.563 CC module/bdev/virtio/bdev_virtio_scsi.o 00:10:17.563 CC module/bdev/virtio/bdev_virtio_blk.o 00:10:17.563 CC module/bdev/virtio/bdev_virtio_rpc.o 00:10:17.821 SYMLINK libspdk_bdev_zone_block.so 00:10:18.079 LIB libspdk_bdev_ftl.a 00:10:18.079 LIB libspdk_bdev_raid.a 00:10:18.079 SO libspdk_bdev_ftl.so.6.0 00:10:18.079 SO libspdk_bdev_raid.so.6.0 00:10:18.338 SYMLINK libspdk_bdev_ftl.so 00:10:18.338 LIB 
libspdk_bdev_iscsi.a 00:10:18.338 SYMLINK libspdk_bdev_raid.so 00:10:18.338 SO libspdk_bdev_iscsi.so.6.0 00:10:18.338 LIB libspdk_bdev_virtio.a 00:10:18.338 SYMLINK libspdk_bdev_iscsi.so 00:10:18.338 SO libspdk_bdev_virtio.so.6.0 00:10:18.598 SYMLINK libspdk_bdev_virtio.so 00:10:19.974 LIB libspdk_bdev_nvme.a 00:10:19.974 SO libspdk_bdev_nvme.so.7.1 00:10:20.232 SYMLINK libspdk_bdev_nvme.so 00:10:20.798 CC module/event/subsystems/fsdev/fsdev.o 00:10:20.798 CC module/event/subsystems/vmd/vmd.o 00:10:20.798 CC module/event/subsystems/vmd/vmd_rpc.o 00:10:20.798 CC module/event/subsystems/sock/sock.o 00:10:20.798 CC module/event/subsystems/iobuf/iobuf.o 00:10:20.798 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:10:20.798 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:10:20.798 CC module/event/subsystems/keyring/keyring.o 00:10:20.798 CC module/event/subsystems/scheduler/scheduler.o 00:10:20.798 LIB libspdk_event_scheduler.a 00:10:20.798 LIB libspdk_event_fsdev.a 00:10:20.798 LIB libspdk_event_iobuf.a 00:10:20.798 LIB libspdk_event_vhost_blk.a 00:10:20.798 SO libspdk_event_scheduler.so.4.0 00:10:20.798 LIB libspdk_event_keyring.a 00:10:20.798 SO libspdk_event_fsdev.so.1.0 00:10:21.059 LIB libspdk_event_vmd.a 00:10:21.059 LIB libspdk_event_sock.a 00:10:21.059 SO libspdk_event_vhost_blk.so.3.0 00:10:21.059 SO libspdk_event_iobuf.so.3.0 00:10:21.059 SO libspdk_event_keyring.so.1.0 00:10:21.059 SO libspdk_event_vmd.so.6.0 00:10:21.059 SYMLINK libspdk_event_scheduler.so 00:10:21.059 SO libspdk_event_sock.so.5.0 00:10:21.059 SYMLINK libspdk_event_fsdev.so 00:10:21.059 SYMLINK libspdk_event_vhost_blk.so 00:10:21.059 SYMLINK libspdk_event_vmd.so 00:10:21.059 SYMLINK libspdk_event_iobuf.so 00:10:21.059 SYMLINK libspdk_event_sock.so 00:10:21.059 SYMLINK libspdk_event_keyring.so 00:10:21.319 CC module/event/subsystems/accel/accel.o 00:10:21.577 LIB libspdk_event_accel.a 00:10:21.577 SO libspdk_event_accel.so.6.0 00:10:21.836 SYMLINK libspdk_event_accel.so 00:10:22.094 CC module/event/subsystems/bdev/bdev.o 00:10:22.352 LIB libspdk_event_bdev.a 00:10:22.352 SO libspdk_event_bdev.so.6.0 00:10:22.352 SYMLINK libspdk_event_bdev.so 00:10:22.610 CC module/event/subsystems/scsi/scsi.o 00:10:22.610 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:10:22.610 CC module/event/subsystems/ublk/ublk.o 00:10:22.610 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:10:22.610 CC module/event/subsystems/nbd/nbd.o 00:10:22.870 LIB libspdk_event_ublk.a 00:10:22.870 LIB libspdk_event_nbd.a 00:10:22.870 SO libspdk_event_ublk.so.3.0 00:10:22.870 LIB libspdk_event_scsi.a 00:10:22.870 SO libspdk_event_nbd.so.6.0 00:10:22.870 SYMLINK libspdk_event_ublk.so 00:10:22.870 SO libspdk_event_scsi.so.6.0 00:10:23.130 SYMLINK libspdk_event_nbd.so 00:10:23.130 LIB libspdk_event_nvmf.a 00:10:23.130 SYMLINK libspdk_event_scsi.so 00:10:23.130 SO libspdk_event_nvmf.so.6.0 00:10:23.130 SYMLINK libspdk_event_nvmf.so 00:10:23.393 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:10:23.393 CC module/event/subsystems/iscsi/iscsi.o 00:10:23.661 LIB libspdk_event_vhost_scsi.a 00:10:23.661 LIB libspdk_event_iscsi.a 00:10:23.661 SO libspdk_event_vhost_scsi.so.3.0 00:10:23.661 SO libspdk_event_iscsi.so.6.0 00:10:23.661 SYMLINK libspdk_event_vhost_scsi.so 00:10:23.661 SYMLINK libspdk_event_iscsi.so 00:10:23.920 SO libspdk.so.6.0 00:10:23.920 SYMLINK libspdk.so 00:10:24.179 CC app/trace_record/trace_record.o 00:10:24.179 CC app/spdk_lspci/spdk_lspci.o 00:10:24.179 CXX app/trace/trace.o 00:10:24.179 CC app/spdk_nvme_perf/perf.o 00:10:24.179 CC 
app/spdk_nvme_identify/identify.o 00:10:24.438 CC app/iscsi_tgt/iscsi_tgt.o 00:10:24.438 CC app/nvmf_tgt/nvmf_main.o 00:10:24.438 CC app/spdk_tgt/spdk_tgt.o 00:10:24.438 CC test/thread/poller_perf/poller_perf.o 00:10:24.438 CC examples/util/zipf/zipf.o 00:10:24.438 LINK spdk_lspci 00:10:24.438 LINK nvmf_tgt 00:10:24.698 LINK iscsi_tgt 00:10:24.698 LINK spdk_trace_record 00:10:24.698 LINK spdk_tgt 00:10:24.698 LINK zipf 00:10:24.698 LINK poller_perf 00:10:24.698 CC app/spdk_nvme_discover/discovery_aer.o 00:10:24.956 LINK spdk_trace 00:10:24.956 CC app/spdk_top/spdk_top.o 00:10:24.956 CC app/spdk_dd/spdk_dd.o 00:10:24.956 LINK spdk_nvme_discover 00:10:25.214 CC app/fio/nvme/fio_plugin.o 00:10:25.214 CC test/dma/test_dma/test_dma.o 00:10:25.214 CC examples/ioat/perf/perf.o 00:10:25.214 CC examples/ioat/verify/verify.o 00:10:25.214 CC test/app/bdev_svc/bdev_svc.o 00:10:25.472 LINK spdk_nvme_perf 00:10:25.472 LINK spdk_nvme_identify 00:10:25.472 LINK verify 00:10:25.472 LINK ioat_perf 00:10:25.472 CC examples/vmd/lsvmd/lsvmd.o 00:10:25.472 LINK spdk_dd 00:10:25.472 LINK bdev_svc 00:10:25.751 LINK lsvmd 00:10:25.751 LINK test_dma 00:10:25.751 CC examples/idxd/perf/perf.o 00:10:25.751 TEST_HEADER include/spdk/accel.h 00:10:25.751 TEST_HEADER include/spdk/accel_module.h 00:10:25.752 TEST_HEADER include/spdk/assert.h 00:10:25.752 TEST_HEADER include/spdk/barrier.h 00:10:25.752 TEST_HEADER include/spdk/base64.h 00:10:25.752 TEST_HEADER include/spdk/bdev.h 00:10:25.752 TEST_HEADER include/spdk/bdev_module.h 00:10:25.752 TEST_HEADER include/spdk/bdev_zone.h 00:10:25.752 TEST_HEADER include/spdk/bit_array.h 00:10:25.752 TEST_HEADER include/spdk/bit_pool.h 00:10:25.752 TEST_HEADER include/spdk/blob_bdev.h 00:10:25.752 TEST_HEADER include/spdk/blobfs_bdev.h 00:10:25.752 CC examples/interrupt_tgt/interrupt_tgt.o 00:10:25.752 TEST_HEADER include/spdk/blobfs.h 00:10:25.752 TEST_HEADER include/spdk/blob.h 00:10:25.752 TEST_HEADER include/spdk/conf.h 00:10:25.752 TEST_HEADER include/spdk/config.h 00:10:25.752 TEST_HEADER include/spdk/cpuset.h 00:10:25.752 TEST_HEADER include/spdk/crc16.h 00:10:26.011 TEST_HEADER include/spdk/crc32.h 00:10:26.011 TEST_HEADER include/spdk/crc64.h 00:10:26.011 TEST_HEADER include/spdk/dif.h 00:10:26.011 TEST_HEADER include/spdk/dma.h 00:10:26.011 TEST_HEADER include/spdk/endian.h 00:10:26.011 TEST_HEADER include/spdk/env_dpdk.h 00:10:26.011 TEST_HEADER include/spdk/env.h 00:10:26.011 TEST_HEADER include/spdk/event.h 00:10:26.011 TEST_HEADER include/spdk/fd_group.h 00:10:26.011 TEST_HEADER include/spdk/fd.h 00:10:26.011 TEST_HEADER include/spdk/file.h 00:10:26.011 TEST_HEADER include/spdk/fsdev.h 00:10:26.011 TEST_HEADER include/spdk/fsdev_module.h 00:10:26.011 TEST_HEADER include/spdk/ftl.h 00:10:26.011 TEST_HEADER include/spdk/gpt_spec.h 00:10:26.011 TEST_HEADER include/spdk/hexlify.h 00:10:26.011 TEST_HEADER include/spdk/histogram_data.h 00:10:26.011 TEST_HEADER include/spdk/idxd.h 00:10:26.011 TEST_HEADER include/spdk/idxd_spec.h 00:10:26.011 TEST_HEADER include/spdk/init.h 00:10:26.011 TEST_HEADER include/spdk/ioat.h 00:10:26.011 TEST_HEADER include/spdk/ioat_spec.h 00:10:26.011 TEST_HEADER include/spdk/iscsi_spec.h 00:10:26.011 TEST_HEADER include/spdk/json.h 00:10:26.011 TEST_HEADER include/spdk/jsonrpc.h 00:10:26.011 TEST_HEADER include/spdk/keyring.h 00:10:26.011 TEST_HEADER include/spdk/keyring_module.h 00:10:26.011 TEST_HEADER include/spdk/likely.h 00:10:26.011 TEST_HEADER include/spdk/log.h 00:10:26.011 TEST_HEADER include/spdk/lvol.h 00:10:26.011 TEST_HEADER 
include/spdk/md5.h 00:10:26.011 TEST_HEADER include/spdk/memory.h 00:10:26.011 TEST_HEADER include/spdk/mmio.h 00:10:26.011 TEST_HEADER include/spdk/nbd.h 00:10:26.011 CC examples/thread/thread/thread_ex.o 00:10:26.011 TEST_HEADER include/spdk/net.h 00:10:26.011 TEST_HEADER include/spdk/notify.h 00:10:26.011 TEST_HEADER include/spdk/nvme.h 00:10:26.011 TEST_HEADER include/spdk/nvme_intel.h 00:10:26.011 TEST_HEADER include/spdk/nvme_ocssd.h 00:10:26.011 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:10:26.011 TEST_HEADER include/spdk/nvme_spec.h 00:10:26.011 TEST_HEADER include/spdk/nvme_zns.h 00:10:26.011 TEST_HEADER include/spdk/nvmf_cmd.h 00:10:26.011 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:10:26.012 TEST_HEADER include/spdk/nvmf.h 00:10:26.012 TEST_HEADER include/spdk/nvmf_spec.h 00:10:26.012 TEST_HEADER include/spdk/nvmf_transport.h 00:10:26.012 TEST_HEADER include/spdk/opal.h 00:10:26.012 TEST_HEADER include/spdk/opal_spec.h 00:10:26.012 TEST_HEADER include/spdk/pci_ids.h 00:10:26.012 TEST_HEADER include/spdk/pipe.h 00:10:26.012 TEST_HEADER include/spdk/queue.h 00:10:26.012 TEST_HEADER include/spdk/reduce.h 00:10:26.012 TEST_HEADER include/spdk/rpc.h 00:10:26.012 TEST_HEADER include/spdk/scheduler.h 00:10:26.012 TEST_HEADER include/spdk/scsi.h 00:10:26.012 TEST_HEADER include/spdk/scsi_spec.h 00:10:26.012 TEST_HEADER include/spdk/sock.h 00:10:26.012 TEST_HEADER include/spdk/stdinc.h 00:10:26.012 TEST_HEADER include/spdk/string.h 00:10:26.012 TEST_HEADER include/spdk/thread.h 00:10:26.012 TEST_HEADER include/spdk/trace.h 00:10:26.012 TEST_HEADER include/spdk/trace_parser.h 00:10:26.012 TEST_HEADER include/spdk/tree.h 00:10:26.012 TEST_HEADER include/spdk/ublk.h 00:10:26.012 TEST_HEADER include/spdk/util.h 00:10:26.012 TEST_HEADER include/spdk/uuid.h 00:10:26.012 TEST_HEADER include/spdk/version.h 00:10:26.012 TEST_HEADER include/spdk/vfio_user_pci.h 00:10:26.012 TEST_HEADER include/spdk/vfio_user_spec.h 00:10:26.012 TEST_HEADER include/spdk/vhost.h 00:10:26.012 TEST_HEADER include/spdk/vmd.h 00:10:26.012 TEST_HEADER include/spdk/xor.h 00:10:26.012 TEST_HEADER include/spdk/zipf.h 00:10:26.012 CXX test/cpp_headers/accel.o 00:10:26.012 LINK interrupt_tgt 00:10:26.012 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:10:26.012 CC examples/vmd/led/led.o 00:10:26.271 LINK spdk_nvme 00:10:26.271 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:10:26.271 CC test/env/mem_callbacks/mem_callbacks.o 00:10:26.271 LINK idxd_perf 00:10:26.271 LINK thread 00:10:26.271 LINK led 00:10:26.271 CXX test/cpp_headers/accel_module.o 00:10:26.529 CC app/fio/bdev/fio_plugin.o 00:10:26.529 CC test/app/histogram_perf/histogram_perf.o 00:10:26.529 LINK spdk_top 00:10:26.529 CC test/app/jsoncat/jsoncat.o 00:10:26.529 CXX test/cpp_headers/assert.o 00:10:26.529 CC test/app/stub/stub.o 00:10:26.788 LINK histogram_perf 00:10:26.788 LINK jsoncat 00:10:26.788 LINK nvme_fuzz 00:10:26.788 CC examples/sock/hello_world/hello_sock.o 00:10:26.788 LINK mem_callbacks 00:10:26.788 LINK stub 00:10:26.788 CXX test/cpp_headers/barrier.o 00:10:26.788 CC test/event/event_perf/event_perf.o 00:10:27.049 CC test/nvme/aer/aer.o 00:10:27.049 LINK event_perf 00:10:27.049 CXX test/cpp_headers/base64.o 00:10:27.049 LINK spdk_bdev 00:10:27.049 CC test/env/vtophys/vtophys.o 00:10:27.049 LINK hello_sock 00:10:27.307 CC examples/fsdev/hello_world/hello_fsdev.o 00:10:27.307 CC test/nvme/reset/reset.o 00:10:27.307 LINK vtophys 00:10:27.307 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:10:27.307 CXX test/cpp_headers/bdev.o 00:10:27.307 CC app/vhost/vhost.o 
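The long TEST_HEADER / `CXX test/cpp_headers/*.o` run above is SPDK's header self-containment check: each public header in include/spdk is compiled as its own translation unit, so a header that forgets one of its own includes fails this build step rather than breaking user code later. A minimal sketch of the idea — the paths and compiler invocation here are illustrative assumptions, not the project's actual build rules:

```bash
# Sketch: compile every public header as a standalone translation unit.
# Paths and flags are illustrative; SPDK's real build generates one
# source file per header under test/cpp_headers.
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    tu="/tmp/check_${name}.cpp"
    printf '#include "spdk/%s.h"\n' "$name" > "$tu"
    # The header is self-contained iff this compiles with no other includes.
    g++ -std=c++17 -Iinclude -c "$tu" -o "/tmp/check_${name}.o" \
        || echo "not self-contained: $hdr" >&2
done
```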
00:10:27.564 CC test/event/reactor/reactor.o 00:10:27.564 CC test/event/reactor_perf/reactor_perf.o 00:10:27.564 LINK aer 00:10:27.564 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:10:27.564 CXX test/cpp_headers/bdev_module.o 00:10:27.564 LINK hello_fsdev 00:10:27.564 LINK reset 00:10:27.564 LINK vhost 00:10:27.564 LINK reactor_perf 00:10:27.564 LINK reactor 00:10:27.564 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:10:27.564 CXX test/cpp_headers/bdev_zone.o 00:10:27.823 LINK env_dpdk_post_init 00:10:27.823 CXX test/cpp_headers/bit_array.o 00:10:27.823 CC test/rpc_client/rpc_client_test.o 00:10:28.081 CC test/event/app_repeat/app_repeat.o 00:10:28.081 CC test/nvme/sgl/sgl.o 00:10:28.081 LINK vhost_fuzz 00:10:28.081 CC test/nvme/e2edp/nvme_dp.o 00:10:28.081 CXX test/cpp_headers/bit_pool.o 00:10:28.081 LINK rpc_client_test 00:10:28.081 CC test/env/memory/memory_ut.o 00:10:28.081 CC test/accel/dif/dif.o 00:10:28.081 CC examples/accel/perf/accel_perf.o 00:10:28.339 LINK app_repeat 00:10:28.339 LINK sgl 00:10:28.339 LINK nvme_dp 00:10:28.339 CC test/env/pci/pci_ut.o 00:10:28.339 CXX test/cpp_headers/blob_bdev.o 00:10:28.339 CC test/nvme/overhead/overhead.o 00:10:28.607 LINK iscsi_fuzz 00:10:28.607 CXX test/cpp_headers/blobfs_bdev.o 00:10:28.607 CXX test/cpp_headers/blobfs.o 00:10:28.607 CC test/event/scheduler/scheduler.o 00:10:28.871 CXX test/cpp_headers/blob.o 00:10:28.871 LINK overhead 00:10:28.871 LINK accel_perf 00:10:28.871 CC test/nvme/err_injection/err_injection.o 00:10:28.871 CC test/blobfs/mkfs/mkfs.o 00:10:28.871 CXX test/cpp_headers/conf.o 00:10:28.871 LINK scheduler 00:10:28.871 LINK pci_ut 00:10:29.163 CC test/nvme/startup/startup.o 00:10:29.163 LINK dif 00:10:29.163 CXX test/cpp_headers/config.o 00:10:29.163 CC test/nvme/reserve/reserve.o 00:10:29.163 LINK err_injection 00:10:29.163 LINK mkfs 00:10:29.163 CXX test/cpp_headers/cpuset.o 00:10:29.163 LINK startup 00:10:29.423 CXX test/cpp_headers/crc16.o 00:10:29.423 CC examples/blob/hello_world/hello_blob.o 00:10:29.423 CXX test/cpp_headers/crc32.o 00:10:29.423 CC examples/blob/cli/blobcli.o 00:10:29.423 LINK reserve 00:10:29.423 LINK memory_ut 00:10:29.423 CC examples/nvme/reconnect/reconnect.o 00:10:29.423 CC examples/nvme/hello_world/hello_world.o 00:10:29.423 CC examples/nvme/nvme_manage/nvme_manage.o 00:10:29.682 CXX test/cpp_headers/crc64.o 00:10:29.682 CC test/lvol/esnap/esnap.o 00:10:29.682 LINK hello_blob 00:10:29.682 CC examples/nvme/arbitration/arbitration.o 00:10:29.682 CC test/nvme/simple_copy/simple_copy.o 00:10:29.682 CXX test/cpp_headers/dif.o 00:10:29.682 LINK hello_world 00:10:29.942 LINK reconnect 00:10:29.942 CXX test/cpp_headers/dma.o 00:10:29.942 LINK blobcli 00:10:29.942 LINK arbitration 00:10:29.942 LINK simple_copy 00:10:29.942 CC examples/nvme/hotplug/hotplug.o 00:10:29.942 CC test/bdev/bdevio/bdevio.o 00:10:30.201 CC examples/nvme/cmb_copy/cmb_copy.o 00:10:30.201 LINK nvme_manage 00:10:30.201 CXX test/cpp_headers/endian.o 00:10:30.201 CC test/nvme/connect_stress/connect_stress.o 00:10:30.201 CXX test/cpp_headers/env_dpdk.o 00:10:30.201 CC test/nvme/boot_partition/boot_partition.o 00:10:30.201 LINK cmb_copy 00:10:30.201 LINK hotplug 00:10:30.460 CC test/nvme/compliance/nvme_compliance.o 00:10:30.460 CXX test/cpp_headers/env.o 00:10:30.460 LINK connect_stress 00:10:30.460 CC examples/nvme/abort/abort.o 00:10:30.460 LINK boot_partition 00:10:30.460 LINK bdevio 00:10:30.460 CC test/nvme/fused_ordering/fused_ordering.o 00:10:30.460 CXX test/cpp_headers/event.o 00:10:30.720 CXX 
test/cpp_headers/fd_group.o 00:10:30.720 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:10:30.720 CC examples/bdev/hello_world/hello_bdev.o 00:10:30.720 CXX test/cpp_headers/fd.o 00:10:30.720 LINK nvme_compliance 00:10:30.720 LINK fused_ordering 00:10:30.979 LINK abort 00:10:30.979 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:30.979 CXX test/cpp_headers/file.o 00:10:30.979 CC examples/bdev/bdevperf/bdevperf.o 00:10:30.979 LINK pmr_persistence 00:10:30.979 CC test/nvme/fdp/fdp.o 00:10:30.979 LINK hello_bdev 00:10:30.979 CXX test/cpp_headers/fsdev.o 00:10:30.979 CC test/nvme/cuse/cuse.o 00:10:30.979 CXX test/cpp_headers/fsdev_module.o 00:10:30.979 LINK doorbell_aers 00:10:31.238 CXX test/cpp_headers/ftl.o 00:10:31.238 CXX test/cpp_headers/gpt_spec.o 00:10:31.238 CXX test/cpp_headers/hexlify.o 00:10:31.238 CXX test/cpp_headers/histogram_data.o 00:10:31.238 CXX test/cpp_headers/idxd.o 00:10:31.238 CXX test/cpp_headers/idxd_spec.o 00:10:31.238 LINK fdp 00:10:31.238 CXX test/cpp_headers/init.o 00:10:31.238 CXX test/cpp_headers/ioat.o 00:10:31.238 CXX test/cpp_headers/ioat_spec.o 00:10:31.497 CXX test/cpp_headers/iscsi_spec.o 00:10:31.497 CXX test/cpp_headers/json.o 00:10:31.497 CXX test/cpp_headers/jsonrpc.o 00:10:31.497 CXX test/cpp_headers/keyring.o 00:10:31.497 CXX test/cpp_headers/keyring_module.o 00:10:31.497 CXX test/cpp_headers/likely.o 00:10:31.497 CXX test/cpp_headers/log.o 00:10:31.497 CXX test/cpp_headers/lvol.o 00:10:31.756 CXX test/cpp_headers/md5.o 00:10:31.756 CXX test/cpp_headers/memory.o 00:10:31.756 CXX test/cpp_headers/mmio.o 00:10:31.756 CXX test/cpp_headers/nbd.o 00:10:31.756 CXX test/cpp_headers/net.o 00:10:31.756 CXX test/cpp_headers/notify.o 00:10:31.756 CXX test/cpp_headers/nvme.o 00:10:31.756 CXX test/cpp_headers/nvme_intel.o 00:10:31.756 CXX test/cpp_headers/nvme_ocssd.o 00:10:32.015 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:32.015 CXX test/cpp_headers/nvme_spec.o 00:10:32.015 CXX test/cpp_headers/nvme_zns.o 00:10:32.015 CXX test/cpp_headers/nvmf_cmd.o 00:10:32.015 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:32.015 CXX test/cpp_headers/nvmf.o 00:10:32.015 LINK bdevperf 00:10:32.015 CXX test/cpp_headers/nvmf_spec.o 00:10:32.015 CXX test/cpp_headers/nvmf_transport.o 00:10:32.015 CXX test/cpp_headers/opal.o 00:10:32.015 CXX test/cpp_headers/opal_spec.o 00:10:32.015 CXX test/cpp_headers/pci_ids.o 00:10:32.273 CXX test/cpp_headers/pipe.o 00:10:32.273 CXX test/cpp_headers/queue.o 00:10:32.273 CXX test/cpp_headers/reduce.o 00:10:32.273 CXX test/cpp_headers/rpc.o 00:10:32.273 CXX test/cpp_headers/scheduler.o 00:10:32.273 CXX test/cpp_headers/scsi.o 00:10:32.273 CXX test/cpp_headers/scsi_spec.o 00:10:32.273 CXX test/cpp_headers/sock.o 00:10:32.273 CXX test/cpp_headers/stdinc.o 00:10:32.532 CXX test/cpp_headers/string.o 00:10:32.532 CXX test/cpp_headers/thread.o 00:10:32.532 CXX test/cpp_headers/trace.o 00:10:32.532 CXX test/cpp_headers/trace_parser.o 00:10:32.532 LINK cuse 00:10:32.532 CXX test/cpp_headers/tree.o 00:10:32.532 CXX test/cpp_headers/ublk.o 00:10:32.532 CXX test/cpp_headers/util.o 00:10:32.532 CC examples/nvmf/nvmf/nvmf.o 00:10:32.532 CXX test/cpp_headers/uuid.o 00:10:32.532 CXX test/cpp_headers/version.o 00:10:32.532 CXX test/cpp_headers/vfio_user_pci.o 00:10:32.532 CXX test/cpp_headers/vfio_user_spec.o 00:10:32.790 CXX test/cpp_headers/vhost.o 00:10:32.790 CXX test/cpp_headers/vmd.o 00:10:32.790 CXX test/cpp_headers/xor.o 00:10:32.790 CXX test/cpp_headers/zipf.o 00:10:33.049 LINK nvmf 00:10:36.342 LINK esnap 00:10:36.342 00:10:36.342 real 1m48.170s 
00:10:36.342 user 9m53.553s 00:10:36.342 sys 2m19.562s 00:10:36.342 22:52:03 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:10:36.342 22:52:03 make -- common/autotest_common.sh@10 -- $ set +x 00:10:36.342 ************************************ 00:10:36.342 END TEST make 00:10:36.342 ************************************ 00:10:36.601 22:52:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:36.601 22:52:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:10:36.601 22:52:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:10:36.601 22:52:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:36.601 22:52:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:10:36.601 22:52:03 -- pm/common@44 -- $ pid=5293 00:10:36.601 22:52:03 -- pm/common@50 -- $ kill -TERM 5293 00:10:36.601 22:52:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:36.601 22:52:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:10:36.601 22:52:03 -- pm/common@44 -- $ pid=5295 00:10:36.601 22:52:03 -- pm/common@50 -- $ kill -TERM 5295 00:10:36.601 22:52:03 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:10:36.601 22:52:03 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:10:36.601 22:52:03 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:36.601 22:52:03 -- common/autotest_common.sh@1711 -- # lcov --version 00:10:36.601 22:52:03 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:36.601 22:52:03 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:36.601 22:52:03 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.601 22:52:03 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.601 22:52:03 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.601 22:52:03 -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.601 22:52:03 -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.601 22:52:03 -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.601 22:52:03 -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.601 22:52:03 -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.601 22:52:03 -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.601 22:52:03 -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.601 22:52:03 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.601 22:52:03 -- scripts/common.sh@344 -- # case "$op" in 00:10:36.601 22:52:03 -- scripts/common.sh@345 -- # : 1 00:10:36.601 22:52:03 -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.601 22:52:03 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.602 22:52:03 -- scripts/common.sh@365 -- # decimal 1 00:10:36.602 22:52:03 -- scripts/common.sh@353 -- # local d=1 00:10:36.602 22:52:03 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.602 22:52:03 -- scripts/common.sh@355 -- # echo 1 00:10:36.602 22:52:03 -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.602 22:52:03 -- scripts/common.sh@366 -- # decimal 2 00:10:36.602 22:52:03 -- scripts/common.sh@353 -- # local d=2 00:10:36.602 22:52:03 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.602 22:52:03 -- scripts/common.sh@355 -- # echo 2 00:10:36.602 22:52:03 -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.602 22:52:03 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.602 22:52:03 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.602 22:52:03 -- scripts/common.sh@368 -- # return 0 00:10:36.602 22:52:03 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.602 22:52:03 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:36.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.602 --rc genhtml_branch_coverage=1 00:10:36.602 --rc genhtml_function_coverage=1 00:10:36.602 --rc genhtml_legend=1 00:10:36.602 --rc geninfo_all_blocks=1 00:10:36.602 --rc geninfo_unexecuted_blocks=1 00:10:36.602 00:10:36.602 ' 00:10:36.602 22:52:03 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:36.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.602 --rc genhtml_branch_coverage=1 00:10:36.602 --rc genhtml_function_coverage=1 00:10:36.602 --rc genhtml_legend=1 00:10:36.602 --rc geninfo_all_blocks=1 00:10:36.602 --rc geninfo_unexecuted_blocks=1 00:10:36.602 00:10:36.602 ' 00:10:36.602 22:52:03 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:36.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.602 --rc genhtml_branch_coverage=1 00:10:36.602 --rc genhtml_function_coverage=1 00:10:36.602 --rc genhtml_legend=1 00:10:36.602 --rc geninfo_all_blocks=1 00:10:36.602 --rc geninfo_unexecuted_blocks=1 00:10:36.602 00:10:36.602 ' 00:10:36.602 22:52:03 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:36.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.602 --rc genhtml_branch_coverage=1 00:10:36.602 --rc genhtml_function_coverage=1 00:10:36.602 --rc genhtml_legend=1 00:10:36.602 --rc geninfo_all_blocks=1 00:10:36.602 --rc geninfo_unexecuted_blocks=1 00:10:36.602 00:10:36.602 ' 00:10:36.602 22:52:03 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:36.602 22:52:03 -- nvmf/common.sh@7 -- # uname -s 00:10:36.602 22:52:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.602 22:52:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.602 22:52:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.602 22:52:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.602 22:52:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.602 22:52:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.602 22:52:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.602 22:52:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.602 22:52:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.602 22:52:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.862 22:52:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f63ccfb0-8e1a-4e3a-81ed-f5c6f2fe319a 00:10:36.862 
22:52:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=f63ccfb0-8e1a-4e3a-81ed-f5c6f2fe319a 00:10:36.862 22:52:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.862 22:52:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.862 22:52:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:36.862 22:52:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.862 22:52:03 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:36.862 22:52:03 -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.862 22:52:03 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.862 22:52:03 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.862 22:52:03 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.862 22:52:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.862 22:52:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.862 22:52:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.862 22:52:03 -- paths/export.sh@5 -- # export PATH 00:10:36.862 22:52:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.862 22:52:03 -- nvmf/common.sh@51 -- # : 0 00:10:36.862 22:52:03 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.862 22:52:03 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.862 22:52:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.862 22:52:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.862 22:52:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.862 22:52:03 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.862 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.862 22:52:03 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.862 22:52:03 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.862 22:52:03 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.862 22:52:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:36.862 22:52:03 -- spdk/autotest.sh@32 -- # uname -s 00:10:36.862 22:52:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:36.862 22:52:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:36.862 22:52:03 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:36.862 22:52:03 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:10:36.862 22:52:03 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:36.862 22:52:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:36.862 22:52:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:36.862 22:52:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:36.862 22:52:04 -- spdk/autotest.sh@48 -- # udevadm_pid=54987 00:10:36.862 22:52:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:36.862 22:52:04 -- pm/common@17 -- # local monitor 00:10:36.862 22:52:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:36.862 22:52:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:36.862 22:52:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:36.862 22:52:04 -- pm/common@25 -- # sleep 1 00:10:36.862 22:52:04 -- pm/common@21 -- # date +%s 00:10:36.862 22:52:04 -- pm/common@21 -- # date +%s 00:10:36.862 22:52:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733784724 00:10:36.862 22:52:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733784724 00:10:36.862 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733784724_collect-cpu-load.pm.log 00:10:36.862 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733784724_collect-vmstat.pm.log 00:10:37.801 22:52:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:37.801 22:52:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:37.801 22:52:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:37.801 22:52:05 -- common/autotest_common.sh@10 -- # set +x 00:10:37.801 22:52:05 -- spdk/autotest.sh@59 -- # create_test_list 00:10:37.801 22:52:05 -- common/autotest_common.sh@752 -- # xtrace_disable 00:10:37.801 22:52:05 -- common/autotest_common.sh@10 -- # set +x 00:10:37.801 22:52:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:37.801 22:52:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:37.801 22:52:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:37.801 22:52:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:37.801 22:52:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:37.801 22:52:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:37.801 22:52:05 -- common/autotest_common.sh@1457 -- # uname 00:10:37.801 22:52:05 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:10:37.801 22:52:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:37.801 22:52:05 -- common/autotest_common.sh@1477 -- # uname 00:10:37.801 22:52:05 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:10:37.801 22:52:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:10:38.060 22:52:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:10:38.060 lcov: LCOV version 1.15 00:10:38.060 22:52:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:56.149 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:56.149 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:11:11.055 22:52:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:11:11.056 22:52:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:11.056 22:52:36 -- common/autotest_common.sh@10 -- # set +x 00:11:11.056 22:52:36 -- spdk/autotest.sh@78 -- # rm -f 00:11:11.056 22:52:36 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:11.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:11.056 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:11:11.056 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:11:11.056 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:11:11.056 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:11:11.056 22:52:37 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:11:11.056 22:52:37 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:11:11.056 22:52:37 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:11:11.056 22:52:37 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:11:11.056 22:52:37 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:11:11.056 22:52:37 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:11:11.056 22:52:37 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:11.056 22:52:37 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:11:11.056 22:52:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:11.056 22:52:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:11:11.056 22:52:37 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:11:11.056 22:52:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:11.056 22:52:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.056 22:52:37 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:11.056 22:52:37 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:11:11.056 22:52:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:11.056 22:52:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:11:11.056 22:52:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:11:11.056 22:52:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:11.056 22:52:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.056 22:52:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:11.056 22:52:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:11:11.056 22:52:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:11:11.056 22:52:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:11.056 22:52:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.056 22:52:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:11.056 22:52:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:11:11.056 22:52:37 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:11:11.056 22:52:37 -- common/autotest_common.sh@1652 -- # [[ 
-e /sys/block/nvme1n3/queue/zoned ]] 00:11:11.056 22:52:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.056 22:52:37 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:11.056 22:52:37 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:11:11.056 22:52:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:11.056 22:52:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:11:11.056 22:52:37 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:11:11.056 22:52:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:11.056 22:52:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.056 22:52:37 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:11:11.056 22:52:37 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:11:11.056 22:52:37 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:11:11.056 22:52:37 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:11:11.056 22:52:37 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:11:11.056 22:52:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:11.056 22:52:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:11:11.056 22:52:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:11:11.056 22:52:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:11.056 22:52:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:11.056 22:52:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:11:11.056 22:52:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:11.056 22:52:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:11.056 No valid GPT data, bailing 00:11:11.056 22:52:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:11.056 22:52:37 -- scripts/common.sh@394 -- # pt= 00:11:11.056 22:52:37 -- scripts/common.sh@395 -- # return 1 00:11:11.056 22:52:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:11:11.056 1+0 records in 00:11:11.056 1+0 records out 00:11:11.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00376304 s, 279 MB/s 00:11:11.056 22:52:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:11.056 22:52:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:11.056 22:52:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:11:11.056 22:52:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:11:11.056 22:52:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:11:11.056 No valid GPT data, bailing 00:11:11.056 22:52:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:11.056 22:52:37 -- scripts/common.sh@394 -- # pt= 00:11:11.056 22:52:37 -- scripts/common.sh@395 -- # return 1 00:11:11.056 22:52:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:11:11.056 1+0 records in 00:11:11.056 1+0 records out 00:11:11.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00627652 s, 167 MB/s 00:11:11.056 22:52:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:11.056 22:52:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:11.056 22:52:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:11:11.056 22:52:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:11:11.056 22:52:37 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:11:11.056 No valid GPT data, bailing 00:11:11.056 22:52:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:11:11.056 22:52:38 -- scripts/common.sh@394 -- # pt= 00:11:11.056 22:52:38 -- scripts/common.sh@395 -- # return 1 00:11:11.056 22:52:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:11:11.056 1+0 records in 00:11:11.056 1+0 records out 00:11:11.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00645836 s, 162 MB/s 00:11:11.056 22:52:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:11.056 22:52:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:11.056 22:52:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:11:11.056 22:52:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:11:11.056 22:52:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:11:11.056 No valid GPT data, bailing 00:11:11.056 22:52:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:11:11.056 22:52:38 -- scripts/common.sh@394 -- # pt= 00:11:11.056 22:52:38 -- scripts/common.sh@395 -- # return 1 00:11:11.056 22:52:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:11:11.056 1+0 records in 00:11:11.056 1+0 records out 00:11:11.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653704 s, 160 MB/s 00:11:11.056 22:52:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:11.056 22:52:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:11.056 22:52:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:11:11.056 22:52:38 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:11:11.056 22:52:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:11:11.056 No valid GPT data, bailing 00:11:11.056 22:52:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:11:11.056 22:52:38 -- scripts/common.sh@394 -- # pt= 00:11:11.056 22:52:38 -- scripts/common.sh@395 -- # return 1 00:11:11.056 22:52:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:11:11.056 1+0 records in 00:11:11.056 1+0 records out 00:11:11.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165618 s, 63.3 MB/s 00:11:11.056 22:52:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:11:11.056 22:52:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:11:11.056 22:52:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:11:11.056 22:52:38 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:11:11.056 22:52:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:11:11.056 No valid GPT data, bailing 00:11:11.056 22:52:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:11:11.056 22:52:38 -- scripts/common.sh@394 -- # pt= 00:11:11.056 22:52:38 -- scripts/common.sh@395 -- # return 1 00:11:11.056 22:52:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:11:11.056 1+0 records in 00:11:11.056 1+0 records out 00:11:11.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449445 s, 233 MB/s 00:11:11.056 22:52:38 -- spdk/autotest.sh@105 -- # sync 00:11:11.056 22:52:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:11:11.056 22:52:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:11:11.056 22:52:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:11:14.345 
22:52:41 -- spdk/autotest.sh@111 -- # uname -s 00:11:14.345 22:52:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:11:14.345 22:52:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:11:14.345 22:52:41 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:14.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:15.172 Hugepages 00:11:15.172 node hugesize free / total 00:11:15.172 node0 1048576kB 0 / 0 00:11:15.172 node0 2048kB 0 / 0 00:11:15.172 00:11:15.172 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:15.431 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:15.431 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:11:15.690 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:11:15.690 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:11:15.949 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:11:15.949 22:52:43 -- spdk/autotest.sh@117 -- # uname -s 00:11:15.949 22:52:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:11:15.949 22:52:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:11:15.949 22:52:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:16.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.455 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.455 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.455 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.455 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:17.455 22:52:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:11:18.390 22:52:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:11:18.390 22:52:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:11:18.390 22:52:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:18.390 22:52:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:18.390 22:52:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:18.390 22:52:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:18.390 22:52:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:18.390 22:52:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:18.390 22:52:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:18.649 22:52:45 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:18.649 22:52:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:18.649 22:52:45 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:19.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:19.545 Waiting for block devices as requested 00:11:19.545 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.545 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.804 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:19.804 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.078 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:25.078 22:52:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:25.078 22:52:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
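The `block_in_use` and `dd` passes traced above ("No valid GPT data, bailing" followed by a 1 MiB zero-fill) are autotest's pre-test disk scrub: every non-partition, non-zoned NVMe namespace with no recognizable partition table gets its first megabyte zeroed so later tests start from a blank device. A simplified sketch of that loop; the real `block_in_use` also consults scripts/spdk-gpt.py before falling back to `blkid`:

```bash
# Sketch of the pre-test scrub traced above.
shopt -s extglob                      # enables the nvme*n!(*p*) pattern
for dev in /dev/nvme*n!(*p*); do
    # Skip zoned namespaces: sysfs reports something other than "none".
    [[ $(cat "/sys/block/$(basename "$dev")/queue/zoned") != none ]] && continue
    # blkid prints a PTTYPE value only when a partition table exists.
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done
```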
00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:25.078 22:52:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:25.078 22:52:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:11:25.078 22:52:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:11:25.078 22:52:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:25.078 22:52:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:25.078 22:52:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:25.078 22:52:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1543 -- # continue 00:11:25.078 22:52:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:25.078 22:52:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:25.078 22:52:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:25.078 22:52:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:11:25.078 22:52:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:11:25.078 22:52:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:25.078 22:52:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:25.078 22:52:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:11:25.078 22:52:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1543 -- # continue 00:11:25.078 22:52:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:25.078 22:52:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:25.078 22:52:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:25.078 22:52:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:25.078 22:52:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1543 -- # continue 00:11:25.078 22:52:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:25.078 22:52:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:11:25.078 22:52:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:11:25.078 22:52:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:11:25.078 22:52:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:11:25.078 22:52:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:11:25.078 22:52:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:25.078 22:52:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:25.078 22:52:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:25.078 22:52:52 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:25.078 22:52:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:25.078 22:52:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:25.078 22:52:52 -- common/autotest_common.sh@1543 -- # continue 00:11:25.078 22:52:52 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:11:25.078 22:52:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:25.078 22:52:52 -- common/autotest_common.sh@10 -- # set +x 00:11:25.337 22:52:52 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:11:25.337 22:52:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.337 22:52:52 -- common/autotest_common.sh@10 -- # set +x 00:11:25.337 22:52:52 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:25.906 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:26.860 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.860 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.860 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.860 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:26.860 22:52:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:11:26.860 22:52:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.860 22:52:54 -- common/autotest_common.sh@10 -- # set +x 00:11:26.860 22:52:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:11:26.860 22:52:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:11:26.860 22:52:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:11:26.860 22:52:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:11:26.860 22:52:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:11:26.860 22:52:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:11:26.860 22:52:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:11:26.860 22:52:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:11:26.860 22:52:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:26.860 22:52:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:26.860 22:52:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:26.860 22:52:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:26.860 22:52:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:27.120 22:52:54 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:27.120 22:52:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:27.120 22:52:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:11:27.120 22:52:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:11:27.120 22:52:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:27.120 22:52:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:27.120 22:52:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:11:27.120 22:52:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:11:27.120 22:52:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:27.120 
22:52:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:27.120 22:52:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:11:27.120 22:52:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:11:27.120 22:52:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:27.120 22:52:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:27.120 22:52:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:11:27.120 22:52:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:11:27.120 22:52:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:27.120 22:52:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:27.120 22:52:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:11:27.120 22:52:54 -- common/autotest_common.sh@1572 -- # return 0 00:11:27.120 22:52:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:11:27.120 22:52:54 -- common/autotest_common.sh@1580 -- # return 0 00:11:27.120 22:52:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:11:27.120 22:52:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:11:27.120 22:52:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:11:27.120 22:52:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:11:27.120 22:52:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:11:27.120 22:52:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.120 22:52:54 -- common/autotest_common.sh@10 -- # set +x 00:11:27.120 22:52:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:11:27.120 22:52:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:27.120 22:52:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:27.120 22:52:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.120 22:52:54 -- common/autotest_common.sh@10 -- # set +x 00:11:27.120 ************************************ 00:11:27.120 START TEST env 00:11:27.120 ************************************ 00:11:27.120 22:52:54 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:27.120 * Looking for test storage... 
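The repeated `nvme id-ctrl` / `grep oacs` passes above are the namespace-revert guard: read each controller's Optional Admin Command Support field, test bit 3 (0x8, namespace management), and if the controller supports it, confirm that no unallocated capacity remains (`unvmcap == 0`) before continuing. A condensed sketch of one iteration, following the trace where `oacs` is 0x12a and so `0x12a & 0x8 = 8`:

```bash
# Sketch of one iteration of the guard traced above.
nvme_ctrlr=/dev/nvme1   # one controller resolved by the bdf loop above
oacs=$(nvme id-ctrl "$nvme_ctrlr" | grep oacs | cut -d: -f2)
if (( (oacs & 0x8) != 0 )); then      # bit 3: namespace management
    unvmcap=$(nvme id-ctrl "$nvme_ctrlr" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "no unallocated capacity on $nvme_ctrlr"
fi
```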
00:11:27.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:11:27.120 22:52:54 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:27.120 22:52:54 env -- common/autotest_common.sh@1711 -- # lcov --version 00:11:27.120 22:52:54 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:27.379 22:52:54 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:27.379 22:52:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.379 22:52:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.379 22:52:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.379 22:52:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.379 22:52:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.379 22:52:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.379 22:52:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.379 22:52:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.379 22:52:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.379 22:52:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.379 22:52:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.379 22:52:54 env -- scripts/common.sh@344 -- # case "$op" in 00:11:27.379 22:52:54 env -- scripts/common.sh@345 -- # : 1 00:11:27.379 22:52:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.379 22:52:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:27.379 22:52:54 env -- scripts/common.sh@365 -- # decimal 1 00:11:27.379 22:52:54 env -- scripts/common.sh@353 -- # local d=1 00:11:27.379 22:52:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.379 22:52:54 env -- scripts/common.sh@355 -- # echo 1 00:11:27.379 22:52:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.379 22:52:54 env -- scripts/common.sh@366 -- # decimal 2 00:11:27.379 22:52:54 env -- scripts/common.sh@353 -- # local d=2 00:11:27.379 22:52:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.379 22:52:54 env -- scripts/common.sh@355 -- # echo 2 00:11:27.379 22:52:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.379 22:52:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.379 22:52:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.379 22:52:54 env -- scripts/common.sh@368 -- # return 0 00:11:27.379 22:52:54 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.379 22:52:54 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:27.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.380 --rc genhtml_branch_coverage=1 00:11:27.380 --rc genhtml_function_coverage=1 00:11:27.380 --rc genhtml_legend=1 00:11:27.380 --rc geninfo_all_blocks=1 00:11:27.380 --rc geninfo_unexecuted_blocks=1 00:11:27.380 00:11:27.380 ' 00:11:27.380 22:52:54 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:27.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.380 --rc genhtml_branch_coverage=1 00:11:27.380 --rc genhtml_function_coverage=1 00:11:27.380 --rc genhtml_legend=1 00:11:27.380 --rc geninfo_all_blocks=1 00:11:27.380 --rc geninfo_unexecuted_blocks=1 00:11:27.380 00:11:27.380 ' 00:11:27.380 22:52:54 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:27.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.380 --rc genhtml_branch_coverage=1 00:11:27.380 --rc genhtml_function_coverage=1 00:11:27.380 --rc 
genhtml_legend=1 00:11:27.380 --rc geninfo_all_blocks=1 00:11:27.380 --rc geninfo_unexecuted_blocks=1 00:11:27.380 00:11:27.380 ' 00:11:27.380 22:52:54 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:27.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.380 --rc genhtml_branch_coverage=1 00:11:27.380 --rc genhtml_function_coverage=1 00:11:27.380 --rc genhtml_legend=1 00:11:27.380 --rc geninfo_all_blocks=1 00:11:27.380 --rc geninfo_unexecuted_blocks=1 00:11:27.380 00:11:27.380 ' 00:11:27.380 22:52:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:27.380 22:52:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:27.380 22:52:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.380 22:52:54 env -- common/autotest_common.sh@10 -- # set +x 00:11:27.380 ************************************ 00:11:27.380 START TEST env_memory 00:11:27.380 ************************************ 00:11:27.380 22:52:54 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:27.380 00:11:27.380 00:11:27.380 CUnit - A unit testing framework for C - Version 2.1-3 00:11:27.380 http://cunit.sourceforge.net/ 00:11:27.380 00:11:27.380 00:11:27.380 Suite: memory 00:11:27.380 Test: alloc and free memory map ...[2024-12-09 22:52:54.616528] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:27.380 passed 00:11:27.380 Test: mem map translation ...[2024-12-09 22:52:54.680153] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:27.380 [2024-12-09 22:52:54.680342] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:27.380 [2024-12-09 22:52:54.680561] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:27.380 [2024-12-09 22:52:54.680624] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:27.640 passed 00:11:27.640 Test: mem map registration ...[2024-12-09 22:52:54.756502] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:11:27.640 [2024-12-09 22:52:54.756680] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:11:27.640 passed 00:11:27.640 Test: mem map adjacent registrations ...passed 00:11:27.640 00:11:27.640 Run Summary: Type Total Ran Passed Failed Inactive 00:11:27.640 suites 1 1 n/a 0 0 00:11:27.640 tests 4 4 4 0 0 00:11:27.640 asserts 152 152 152 0 n/a 00:11:27.640 00:11:27.640 Elapsed time = 0.272 seconds 00:11:27.640 00:11:27.640 ************************************ 00:11:27.640 END TEST env_memory 00:11:27.640 ************************************ 00:11:27.640 real 0m0.321s 00:11:27.640 user 0m0.282s 00:11:27.640 sys 0m0.027s 00:11:27.640 22:52:54 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.640 22:52:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:11:27.640 22:52:54 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:27.640 22:52:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:27.640 22:52:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.640 22:52:54 env -- common/autotest_common.sh@10 -- # set +x 00:11:27.640 ************************************ 00:11:27.640 START TEST env_vtophys 00:11:27.640 ************************************ 00:11:27.640 22:52:54 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:27.949 EAL: lib.eal log level changed from notice to debug 00:11:27.949 EAL: Detected lcore 0 as core 0 on socket 0 00:11:27.949 EAL: Detected lcore 1 as core 0 on socket 0 00:11:27.949 EAL: Detected lcore 2 as core 0 on socket 0 00:11:27.949 EAL: Detected lcore 3 as core 0 on socket 0 00:11:27.949 EAL: Detected lcore 4 as core 0 on socket 0 00:11:27.949 EAL: Detected lcore 5 as core 0 on socket 0 00:11:27.949 EAL: Detected lcore 6 as core 0 on socket 0 00:11:27.949 EAL: Detected lcore 7 as core 0 on socket 0 00:11:27.949 EAL: Detected lcore 8 as core 0 on socket 0 00:11:27.949 EAL: Detected lcore 9 as core 0 on socket 0 00:11:27.949 EAL: Maximum logical cores by configuration: 128 00:11:27.949 EAL: Detected CPU lcores: 10 00:11:27.949 EAL: Detected NUMA nodes: 1 00:11:27.949 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:11:27.949 EAL: Detected shared linkage of DPDK 00:11:27.949 EAL: No shared files mode enabled, IPC will be disabled 00:11:27.949 EAL: Selected IOVA mode 'PA' 00:11:27.949 EAL: Probing VFIO support... 00:11:27.949 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:27.949 EAL: VFIO modules not loaded, skipping VFIO support... 00:11:27.949 EAL: Ask a virtual area of 0x2e000 bytes 00:11:27.949 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:27.949 EAL: Setting up physically contiguous memory... 
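The EAL output that follows reserves the hugepage-backed memseg lists the vtophys translation checks run against. A minimal sketch of provisioning that backing memory and launching the same binary by hand, assuming the stock scripts/setup.sh helper and its HUGEMEM variable (neither appears in this log):

    sudo HUGEMEM=2048 ./scripts/setup.sh    # assumed helper: reserve 2 GiB of 2 MiB hugepages
    ./test/env/vtophys/vtophys              # the same binary run_test drives above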
00:11:27.949 EAL: Setting maximum number of open files to 524288 00:11:27.949 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:27.949 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:27.949 EAL: Ask a virtual area of 0x61000 bytes 00:11:27.949 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:27.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:27.949 EAL: Ask a virtual area of 0x400000000 bytes 00:11:27.949 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:27.949 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:27.949 EAL: Ask a virtual area of 0x61000 bytes 00:11:27.949 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:27.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:27.949 EAL: Ask a virtual area of 0x400000000 bytes 00:11:27.949 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:27.949 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:27.949 EAL: Ask a virtual area of 0x61000 bytes 00:11:27.949 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:27.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:27.949 EAL: Ask a virtual area of 0x400000000 bytes 00:11:27.949 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:27.949 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:27.949 EAL: Ask a virtual area of 0x61000 bytes 00:11:27.949 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:27.949 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:27.949 EAL: Ask a virtual area of 0x400000000 bytes 00:11:27.949 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:27.949 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:27.949 EAL: Hugepages will be freed exactly as allocated. 00:11:27.949 EAL: No shared files mode enabled, IPC is disabled 00:11:27.949 EAL: No shared files mode enabled, IPC is disabled 00:11:27.949 EAL: TSC frequency is ~2490000 KHz 00:11:27.949 EAL: Main lcore 0 is ready (tid=7fca47d1ca40;cpuset=[0]) 00:11:27.949 EAL: Trying to obtain current memory policy. 00:11:27.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:27.949 EAL: Restoring previous memory policy: 0 00:11:27.949 EAL: request: mp_malloc_sync 00:11:27.949 EAL: No shared files mode enabled, IPC is disabled 00:11:27.949 EAL: Heap on socket 0 was expanded by 2MB 00:11:27.949 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:27.949 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:27.949 EAL: Mem event callback 'spdk:(nil)' registered 00:11:27.949 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:11:27.949 00:11:27.949 00:11:27.949 CUnit - A unit testing framework for C - Version 2.1-3 00:11:27.949 http://cunit.sourceforge.net/ 00:11:27.949 00:11:27.949 00:11:27.949 Suite: components_suite 00:11:28.531 Test: vtophys_malloc_test ...passed 00:11:28.531 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:11:28.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:28.531 EAL: Restoring previous memory policy: 4 00:11:28.531 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.531 EAL: request: mp_malloc_sync 00:11:28.531 EAL: No shared files mode enabled, IPC is disabled 00:11:28.531 EAL: Heap on socket 0 was expanded by 4MB 00:11:28.531 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.531 EAL: request: mp_malloc_sync 00:11:28.531 EAL: No shared files mode enabled, IPC is disabled 00:11:28.531 EAL: Heap on socket 0 was shrunk by 4MB 00:11:28.531 EAL: Trying to obtain current memory policy. 00:11:28.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:28.531 EAL: Restoring previous memory policy: 4 00:11:28.531 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.531 EAL: request: mp_malloc_sync 00:11:28.531 EAL: No shared files mode enabled, IPC is disabled 00:11:28.531 EAL: Heap on socket 0 was expanded by 6MB 00:11:28.531 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.531 EAL: request: mp_malloc_sync 00:11:28.531 EAL: No shared files mode enabled, IPC is disabled 00:11:28.531 EAL: Heap on socket 0 was shrunk by 6MB 00:11:28.531 EAL: Trying to obtain current memory policy. 00:11:28.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:28.531 EAL: Restoring previous memory policy: 4 00:11:28.531 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.531 EAL: request: mp_malloc_sync 00:11:28.531 EAL: No shared files mode enabled, IPC is disabled 00:11:28.531 EAL: Heap on socket 0 was expanded by 10MB 00:11:28.531 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.531 EAL: request: mp_malloc_sync 00:11:28.531 EAL: No shared files mode enabled, IPC is disabled 00:11:28.531 EAL: Heap on socket 0 was shrunk by 10MB 00:11:28.531 EAL: Trying to obtain current memory policy. 00:11:28.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:28.531 EAL: Restoring previous memory policy: 4 00:11:28.531 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.531 EAL: request: mp_malloc_sync 00:11:28.531 EAL: No shared files mode enabled, IPC is disabled 00:11:28.531 EAL: Heap on socket 0 was expanded by 18MB 00:11:28.531 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.531 EAL: request: mp_malloc_sync 00:11:28.531 EAL: No shared files mode enabled, IPC is disabled 00:11:28.531 EAL: Heap on socket 0 was shrunk by 18MB 00:11:28.531 EAL: Trying to obtain current memory policy. 00:11:28.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:28.531 EAL: Restoring previous memory policy: 4 00:11:28.531 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.531 EAL: request: mp_malloc_sync 00:11:28.531 EAL: No shared files mode enabled, IPC is disabled 00:11:28.531 EAL: Heap on socket 0 was expanded by 34MB 00:11:28.531 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.531 EAL: request: mp_malloc_sync 00:11:28.531 EAL: No shared files mode enabled, IPC is disabled 00:11:28.531 EAL: Heap on socket 0 was shrunk by 34MB 00:11:28.791 EAL: Trying to obtain current memory policy. 
00:11:28.791 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:28.791 EAL: Restoring previous memory policy: 4 00:11:28.791 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.791 EAL: request: mp_malloc_sync 00:11:28.791 EAL: No shared files mode enabled, IPC is disabled 00:11:28.791 EAL: Heap on socket 0 was expanded by 66MB 00:11:28.791 EAL: Calling mem event callback 'spdk:(nil)' 00:11:28.791 EAL: request: mp_malloc_sync 00:11:28.791 EAL: No shared files mode enabled, IPC is disabled 00:11:28.791 EAL: Heap on socket 0 was shrunk by 66MB 00:11:29.049 EAL: Trying to obtain current memory policy. 00:11:29.050 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:29.050 EAL: Restoring previous memory policy: 4 00:11:29.050 EAL: Calling mem event callback 'spdk:(nil)' 00:11:29.050 EAL: request: mp_malloc_sync 00:11:29.050 EAL: No shared files mode enabled, IPC is disabled 00:11:29.050 EAL: Heap on socket 0 was expanded by 130MB 00:11:29.309 EAL: Calling mem event callback 'spdk:(nil)' 00:11:29.309 EAL: request: mp_malloc_sync 00:11:29.309 EAL: No shared files mode enabled, IPC is disabled 00:11:29.309 EAL: Heap on socket 0 was shrunk by 130MB 00:11:29.567 EAL: Trying to obtain current memory policy. 00:11:29.567 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:29.567 EAL: Restoring previous memory policy: 4 00:11:29.567 EAL: Calling mem event callback 'spdk:(nil)' 00:11:29.567 EAL: request: mp_malloc_sync 00:11:29.567 EAL: No shared files mode enabled, IPC is disabled 00:11:29.567 EAL: Heap on socket 0 was expanded by 258MB 00:11:30.136 EAL: Calling mem event callback 'spdk:(nil)' 00:11:30.136 EAL: request: mp_malloc_sync 00:11:30.136 EAL: No shared files mode enabled, IPC is disabled 00:11:30.136 EAL: Heap on socket 0 was shrunk by 258MB 00:11:30.395 EAL: Trying to obtain current memory policy. 00:11:30.395 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:30.654 EAL: Restoring previous memory policy: 4 00:11:30.654 EAL: Calling mem event callback 'spdk:(nil)' 00:11:30.654 EAL: request: mp_malloc_sync 00:11:30.654 EAL: No shared files mode enabled, IPC is disabled 00:11:30.654 EAL: Heap on socket 0 was expanded by 514MB 00:11:31.590 EAL: Calling mem event callback 'spdk:(nil)' 00:11:31.849 EAL: request: mp_malloc_sync 00:11:31.849 EAL: No shared files mode enabled, IPC is disabled 00:11:31.849 EAL: Heap on socket 0 was shrunk by 514MB 00:11:32.787 EAL: Trying to obtain current memory policy. 
00:11:32.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:33.046 EAL: Restoring previous memory policy: 4 00:11:33.046 EAL: Calling mem event callback 'spdk:(nil)' 00:11:33.046 EAL: request: mp_malloc_sync 00:11:33.046 EAL: No shared files mode enabled, IPC is disabled 00:11:33.046 EAL: Heap on socket 0 was expanded by 1026MB 00:11:34.951 EAL: Calling mem event callback 'spdk:(nil)' 00:11:35.241 EAL: request: mp_malloc_sync 00:11:35.241 EAL: No shared files mode enabled, IPC is disabled 00:11:35.241 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:37.154 passed 00:11:37.154 00:11:37.154 Run Summary: Type Total Ran Passed Failed Inactive 00:11:37.154 suites 1 1 n/a 0 0 00:11:37.154 tests 2 2 2 0 0 00:11:37.154 asserts 5817 5817 5817 0 n/a 00:11:37.154 00:11:37.154 Elapsed time = 8.807 seconds 00:11:37.154 EAL: Calling mem event callback 'spdk:(nil)' 00:11:37.154 EAL: request: mp_malloc_sync 00:11:37.154 EAL: No shared files mode enabled, IPC is disabled 00:11:37.154 EAL: Heap on socket 0 was shrunk by 2MB 00:11:37.154 EAL: No shared files mode enabled, IPC is disabled 00:11:37.154 EAL: No shared files mode enabled, IPC is disabled 00:11:37.154 EAL: No shared files mode enabled, IPC is disabled 00:11:37.154 ************************************ 00:11:37.154 END TEST env_vtophys 00:11:37.154 ************************************ 00:11:37.154 00:11:37.154 real 0m9.157s 00:11:37.154 user 0m7.827s 00:11:37.154 sys 0m1.152s 00:11:37.154 22:53:04 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.154 22:53:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:11:37.154 22:53:04 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:37.154 22:53:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:37.154 22:53:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.154 22:53:04 env -- common/autotest_common.sh@10 -- # set +x 00:11:37.154 ************************************ 00:11:37.154 START TEST env_pci 00:11:37.154 ************************************ 00:11:37.154 22:53:04 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:37.154 00:11:37.154 00:11:37.154 CUnit - A unit testing framework for C - Version 2.1-3 00:11:37.154 http://cunit.sourceforge.net/ 00:11:37.154 00:11:37.154 00:11:37.154 Suite: pci 00:11:37.154 Test: pci_hook ...[2024-12-09 22:53:04.189982] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57871 has claimed it 00:11:37.154 EAL: Cannot find device (10000:00:01.0) 00:11:37.154 passed 00:11:37.154 00:11:37.154 Run Summary: Type Total Ran Passed Failed Inactive 00:11:37.154 suites 1 1 n/a 0 0 00:11:37.154 tests 1 1 1 0 0 00:11:37.154 asserts 25 25 25 0 n/a 00:11:37.154 00:11:37.154 Elapsed time = 0.008 seconds 00:11:37.154 EAL: Failed to attach device on primary process 00:11:37.154 00:11:37.154 real 0m0.113s 00:11:37.154 user 0m0.044s 00:11:37.154 sys 0m0.067s 00:11:37.154 22:53:04 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.154 22:53:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:11:37.154 ************************************ 00:11:37.154 END TEST env_pci 00:11:37.154 ************************************ 00:11:37.154 22:53:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:37.154 22:53:04 env -- env/env.sh@15 -- # uname 00:11:37.154 22:53:04 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:37.154 22:53:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:37.154 22:53:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:37.154 22:53:04 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:37.154 22:53:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.154 22:53:04 env -- common/autotest_common.sh@10 -- # set +x 00:11:37.154 ************************************ 00:11:37.154 START TEST env_dpdk_post_init 00:11:37.154 ************************************ 00:11:37.154 22:53:04 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:37.154 EAL: Detected CPU lcores: 10 00:11:37.154 EAL: Detected NUMA nodes: 1 00:11:37.154 EAL: Detected shared linkage of DPDK 00:11:37.154 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:37.154 EAL: Selected IOVA mode 'PA' 00:11:37.413 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:37.413 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:37.413 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:11:37.413 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:11:37.413 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:11:37.413 Starting DPDK initialization... 00:11:37.413 Starting SPDK post initialization... 00:11:37.413 SPDK NVMe probe 00:11:37.413 Attaching to 0000:00:10.0 00:11:37.413 Attaching to 0000:00:11.0 00:11:37.413 Attaching to 0000:00:12.0 00:11:37.413 Attaching to 0000:00:13.0 00:11:37.413 Attached to 0000:00:10.0 00:11:37.413 Attached to 0000:00:11.0 00:11:37.413 Attached to 0000:00:13.0 00:11:37.413 Attached to 0000:00:12.0 00:11:37.413 Cleaning up... 
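The attach/cleanup block above is the whole point of env_dpdk_post_init: EAL probes the four emulated NVMe functions and SPDK attaches to each in turn. A sketch of the equivalent manual invocation, reusing the exact flags the harness passes (-c 0x1 pins a single core, --base-virtaddr matches the reserved VA window); the relative path is an assumption and presumes the spdk repo root:

    ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000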
00:11:37.413 ************************************ 00:11:37.413 END TEST env_dpdk_post_init 00:11:37.413 ************************************ 00:11:37.413 00:11:37.413 real 0m0.323s 00:11:37.413 user 0m0.096s 00:11:37.413 sys 0m0.129s 00:11:37.413 22:53:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.413 22:53:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:11:37.413 22:53:04 env -- env/env.sh@26 -- # uname 00:11:37.413 22:53:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:37.413 22:53:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:37.413 22:53:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:37.413 22:53:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.413 22:53:04 env -- common/autotest_common.sh@10 -- # set +x 00:11:37.413 ************************************ 00:11:37.413 START TEST env_mem_callbacks 00:11:37.413 ************************************ 00:11:37.413 22:53:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:37.672 EAL: Detected CPU lcores: 10 00:11:37.672 EAL: Detected NUMA nodes: 1 00:11:37.672 EAL: Detected shared linkage of DPDK 00:11:37.672 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:37.672 EAL: Selected IOVA mode 'PA' 00:11:37.672 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:37.672 00:11:37.672 00:11:37.672 CUnit - A unit testing framework for C - Version 2.1-3 00:11:37.672 http://cunit.sourceforge.net/ 00:11:37.672 00:11:37.672 00:11:37.672 Suite: memory 00:11:37.672 Test: test ... 00:11:37.672 register 0x200000200000 2097152 00:11:37.672 malloc 3145728 00:11:37.672 register 0x200000400000 4194304 00:11:37.672 buf 0x2000004fffc0 len 3145728 PASSED 00:11:37.672 malloc 64 00:11:37.672 buf 0x2000004ffec0 len 64 PASSED 00:11:37.672 malloc 4194304 00:11:37.672 register 0x200000800000 6291456 00:11:37.672 buf 0x2000009fffc0 len 4194304 PASSED 00:11:37.672 free 0x2000004fffc0 3145728 00:11:37.672 free 0x2000004ffec0 64 00:11:37.672 unregister 0x200000400000 4194304 PASSED 00:11:37.672 free 0x2000009fffc0 4194304 00:11:37.672 unregister 0x200000800000 6291456 PASSED 00:11:37.672 malloc 8388608 00:11:37.672 register 0x200000400000 10485760 00:11:37.672 buf 0x2000005fffc0 len 8388608 PASSED 00:11:37.672 free 0x2000005fffc0 8388608 00:11:37.672 unregister 0x200000400000 10485760 PASSED 00:11:37.672 passed 00:11:37.673 00:11:37.673 Run Summary: Type Total Ran Passed Failed Inactive 00:11:37.673 suites 1 1 n/a 0 0 00:11:37.673 tests 1 1 1 0 0 00:11:37.673 asserts 15 15 15 0 n/a 00:11:37.673 00:11:37.673 Elapsed time = 0.081 seconds 00:11:37.933 00:11:37.933 real 0m0.299s 00:11:37.933 user 0m0.118s 00:11:37.933 sys 0m0.077s 00:11:37.933 22:53:05 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.933 ************************************ 00:11:37.933 END TEST env_mem_callbacks 00:11:37.933 ************************************ 00:11:37.933 22:53:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:11:37.933 ************************************ 00:11:37.933 END TEST env 00:11:37.933 ************************************ 00:11:37.933 00:11:37.933 real 0m10.762s 00:11:37.933 user 0m8.588s 00:11:37.933 sys 0m1.791s 00:11:37.933 22:53:05 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.933 22:53:05 env -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.933 22:53:05 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:37.933 22:53:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:37.933 22:53:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:37.933 22:53:05 -- common/autotest_common.sh@10 -- # set +x 00:11:37.933 ************************************ 00:11:37.933 START TEST rpc 00:11:37.933 ************************************ 00:11:37.933 22:53:05 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:37.933 * Looking for test storage... 00:11:37.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:37.933 22:53:05 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:38.193 22:53:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.193 22:53:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.193 22:53:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.193 22:53:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.193 22:53:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.193 22:53:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.193 22:53:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.193 22:53:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.193 22:53:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.193 22:53:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.193 22:53:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.193 22:53:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:38.193 22:53:05 rpc -- scripts/common.sh@345 -- # : 1 00:11:38.193 22:53:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.193 22:53:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:38.193 22:53:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:11:38.193 22:53:05 rpc -- scripts/common.sh@353 -- # local d=1 00:11:38.193 22:53:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.193 22:53:05 rpc -- scripts/common.sh@355 -- # echo 1 00:11:38.193 22:53:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.193 22:53:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:11:38.193 22:53:05 rpc -- scripts/common.sh@353 -- # local d=2 00:11:38.193 22:53:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.193 22:53:05 rpc -- scripts/common.sh@355 -- # echo 2 00:11:38.193 22:53:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.193 22:53:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.193 22:53:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.193 22:53:05 rpc -- scripts/common.sh@368 -- # return 0 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:38.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.193 --rc genhtml_branch_coverage=1 00:11:38.193 --rc genhtml_function_coverage=1 00:11:38.193 --rc genhtml_legend=1 00:11:38.193 --rc geninfo_all_blocks=1 00:11:38.193 --rc geninfo_unexecuted_blocks=1 00:11:38.193 00:11:38.193 ' 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:38.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.193 --rc genhtml_branch_coverage=1 00:11:38.193 --rc genhtml_function_coverage=1 00:11:38.193 --rc genhtml_legend=1 00:11:38.193 --rc geninfo_all_blocks=1 00:11:38.193 --rc geninfo_unexecuted_blocks=1 00:11:38.193 00:11:38.193 ' 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:38.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.193 --rc genhtml_branch_coverage=1 00:11:38.193 --rc genhtml_function_coverage=1 00:11:38.193 --rc genhtml_legend=1 00:11:38.193 --rc geninfo_all_blocks=1 00:11:38.193 --rc geninfo_unexecuted_blocks=1 00:11:38.193 00:11:38.193 ' 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:38.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.193 --rc genhtml_branch_coverage=1 00:11:38.193 --rc genhtml_function_coverage=1 00:11:38.193 --rc genhtml_legend=1 00:11:38.193 --rc geninfo_all_blocks=1 00:11:38.193 --rc geninfo_unexecuted_blocks=1 00:11:38.193 00:11:38.193 ' 00:11:38.193 22:53:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57998 00:11:38.193 22:53:05 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:38.193 22:53:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:38.193 22:53:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57998 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 57998 ']' 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
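Once spdk_tgt reports its socket, every rpc_cmd in the traces below is a JSON-RPC call over /var/tmp/spdk.sock. A sketch of driving the same target by hand, assuming scripts/rpc.py from the repo; rpc_get_methods is a standard SPDK method used here for illustration, not taken from this log:

    ./build/bin/spdk_tgt -e bdev &                      # same -e bdev tracepoint mask as above
    # once the 'Waiting for process...' line clears, talk to it over the default socket
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods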
00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.193 22:53:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.193 [2024-12-09 22:53:05.483417] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:11:38.193 [2024-12-09 22:53:05.483776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57998 ] 00:11:38.452 [2024-12-09 22:53:05.665999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.712 [2024-12-09 22:53:05.815677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:38.712 [2024-12-09 22:53:05.815944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57998' to capture a snapshot of events at runtime. 00:11:38.712 [2024-12-09 22:53:05.816107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.712 [2024-12-09 22:53:05.816167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.712 [2024-12-09 22:53:05.816198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57998 for offline analysis/debug. 00:11:38.712 [2024-12-09 22:53:05.817610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.649 22:53:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.649 22:53:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:11:39.649 22:53:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:39.649 22:53:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:39.649 22:53:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:39.649 22:53:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:39.649 22:53:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:39.649 22:53:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.649 22:53:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.649 ************************************ 00:11:39.649 START TEST rpc_integrity 00:11:39.649 ************************************ 00:11:39.649 22:53:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:11:39.649 22:53:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.649 22:53:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.649 22:53:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.649 22:53:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.649 22:53:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:39.649 22:53:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:39.649 22:53:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:39.649 22:53:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:39.649 22:53:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.649 22:53:06 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.649 22:53:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.649 22:53:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:39.649 22:53:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:39.649 22:53:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.649 22:53:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.649 22:53:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.649 22:53:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:39.649 { 00:11:39.649 "name": "Malloc0", 00:11:39.649 "aliases": [ 00:11:39.649 "30aa7674-6292-421d-bcc5-6e5f89cac581" 00:11:39.649 ], 00:11:39.649 "product_name": "Malloc disk", 00:11:39.649 "block_size": 512, 00:11:39.649 "num_blocks": 16384, 00:11:39.649 "uuid": "30aa7674-6292-421d-bcc5-6e5f89cac581", 00:11:39.649 "assigned_rate_limits": { 00:11:39.649 "rw_ios_per_sec": 0, 00:11:39.649 "rw_mbytes_per_sec": 0, 00:11:39.649 "r_mbytes_per_sec": 0, 00:11:39.649 "w_mbytes_per_sec": 0 00:11:39.649 }, 00:11:39.649 "claimed": false, 00:11:39.649 "zoned": false, 00:11:39.649 "supported_io_types": { 00:11:39.649 "read": true, 00:11:39.649 "write": true, 00:11:39.649 "unmap": true, 00:11:39.649 "flush": true, 00:11:39.649 "reset": true, 00:11:39.649 "nvme_admin": false, 00:11:39.649 "nvme_io": false, 00:11:39.649 "nvme_io_md": false, 00:11:39.649 "write_zeroes": true, 00:11:39.649 "zcopy": true, 00:11:39.649 "get_zone_info": false, 00:11:39.649 "zone_management": false, 00:11:39.649 "zone_append": false, 00:11:39.649 "compare": false, 00:11:39.649 "compare_and_write": false, 00:11:39.649 "abort": true, 00:11:39.649 "seek_hole": false, 00:11:39.649 "seek_data": false, 00:11:39.649 "copy": true, 00:11:39.649 "nvme_iov_md": false 00:11:39.649 }, 00:11:39.649 "memory_domains": [ 00:11:39.649 { 00:11:39.649 "dma_device_id": "system", 00:11:39.649 "dma_device_type": 1 00:11:39.649 }, 00:11:39.649 { 00:11:39.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.649 "dma_device_type": 2 00:11:39.649 } 00:11:39.649 ], 00:11:39.649 "driver_specific": {} 00:11:39.649 } 00:11:39.649 ]' 00:11:39.649 22:53:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:39.908 22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:39.908 22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:39.908 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.908 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.909 [2024-12-09 22:53:07.025369] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:39.909 [2024-12-09 22:53:07.025440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.909 [2024-12-09 22:53:07.025477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:39.909 [2024-12-09 22:53:07.025492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.909 [2024-12-09 22:53:07.028338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.909 [2024-12-09 22:53:07.028512] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:39.909 Passthru0 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.909 
22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.909 22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:39.909 { 00:11:39.909 "name": "Malloc0", 00:11:39.909 "aliases": [ 00:11:39.909 "30aa7674-6292-421d-bcc5-6e5f89cac581" 00:11:39.909 ], 00:11:39.909 "product_name": "Malloc disk", 00:11:39.909 "block_size": 512, 00:11:39.909 "num_blocks": 16384, 00:11:39.909 "uuid": "30aa7674-6292-421d-bcc5-6e5f89cac581", 00:11:39.909 "assigned_rate_limits": { 00:11:39.909 "rw_ios_per_sec": 0, 00:11:39.909 "rw_mbytes_per_sec": 0, 00:11:39.909 "r_mbytes_per_sec": 0, 00:11:39.909 "w_mbytes_per_sec": 0 00:11:39.909 }, 00:11:39.909 "claimed": true, 00:11:39.909 "claim_type": "exclusive_write", 00:11:39.909 "zoned": false, 00:11:39.909 "supported_io_types": { 00:11:39.909 "read": true, 00:11:39.909 "write": true, 00:11:39.909 "unmap": true, 00:11:39.909 "flush": true, 00:11:39.909 "reset": true, 00:11:39.909 "nvme_admin": false, 00:11:39.909 "nvme_io": false, 00:11:39.909 "nvme_io_md": false, 00:11:39.909 "write_zeroes": true, 00:11:39.909 "zcopy": true, 00:11:39.909 "get_zone_info": false, 00:11:39.909 "zone_management": false, 00:11:39.909 "zone_append": false, 00:11:39.909 "compare": false, 00:11:39.909 "compare_and_write": false, 00:11:39.909 "abort": true, 00:11:39.909 "seek_hole": false, 00:11:39.909 "seek_data": false, 00:11:39.909 "copy": true, 00:11:39.909 "nvme_iov_md": false 00:11:39.909 }, 00:11:39.909 "memory_domains": [ 00:11:39.909 { 00:11:39.909 "dma_device_id": "system", 00:11:39.909 "dma_device_type": 1 00:11:39.909 }, 00:11:39.909 { 00:11:39.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.909 "dma_device_type": 2 00:11:39.909 } 00:11:39.909 ], 00:11:39.909 "driver_specific": {} 00:11:39.909 }, 00:11:39.909 { 00:11:39.909 "name": "Passthru0", 00:11:39.909 "aliases": [ 00:11:39.909 "9be623f7-5846-5210-b9e7-a6c776124cc4" 00:11:39.909 ], 00:11:39.909 "product_name": "passthru", 00:11:39.909 "block_size": 512, 00:11:39.909 "num_blocks": 16384, 00:11:39.909 "uuid": "9be623f7-5846-5210-b9e7-a6c776124cc4", 00:11:39.909 "assigned_rate_limits": { 00:11:39.909 "rw_ios_per_sec": 0, 00:11:39.909 "rw_mbytes_per_sec": 0, 00:11:39.909 "r_mbytes_per_sec": 0, 00:11:39.909 "w_mbytes_per_sec": 0 00:11:39.909 }, 00:11:39.909 "claimed": false, 00:11:39.909 "zoned": false, 00:11:39.909 "supported_io_types": { 00:11:39.909 "read": true, 00:11:39.909 "write": true, 00:11:39.909 "unmap": true, 00:11:39.909 "flush": true, 00:11:39.909 "reset": true, 00:11:39.909 "nvme_admin": false, 00:11:39.909 "nvme_io": false, 00:11:39.909 "nvme_io_md": false, 00:11:39.909 "write_zeroes": true, 00:11:39.909 "zcopy": true, 00:11:39.909 "get_zone_info": false, 00:11:39.909 "zone_management": false, 00:11:39.909 "zone_append": false, 00:11:39.909 "compare": false, 00:11:39.909 "compare_and_write": false, 00:11:39.909 "abort": true, 00:11:39.909 "seek_hole": false, 00:11:39.909 "seek_data": false, 00:11:39.909 "copy": true, 00:11:39.909 "nvme_iov_md": false 00:11:39.909 }, 00:11:39.909 "memory_domains": [ 00:11:39.909 { 00:11:39.909 "dma_device_id": "system", 00:11:39.909 "dma_device_type": 1 00:11:39.909 }, 00:11:39.909 { 00:11:39.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.909 "dma_device_type": 2 
00:11:39.909 } 00:11:39.909 ], 00:11:39.909 "driver_specific": { 00:11:39.909 "passthru": { 00:11:39.909 "name": "Passthru0", 00:11:39.909 "base_bdev_name": "Malloc0" 00:11:39.909 } 00:11:39.909 } 00:11:39.909 } 00:11:39.909 ]' 00:11:39.909 22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:39.909 22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:39.909 22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.909 22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.909 22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.909 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.909 22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:39.910 22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:39.910 ************************************ 00:11:39.910 END TEST rpc_integrity 00:11:39.910 ************************************ 00:11:39.910 22:53:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:39.910 00:11:39.910 real 0m0.351s 00:11:39.910 user 0m0.192s 00:11:39.910 sys 0m0.060s 00:11:39.910 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.910 22:53:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.169 22:53:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:40.169 22:53:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:40.169 22:53:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.169 22:53:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.169 ************************************ 00:11:40.169 START TEST rpc_plugins 00:11:40.169 ************************************ 00:11:40.169 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:11:40.169 22:53:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:40.169 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.169 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.170 22:53:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:40.170 22:53:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.170 22:53:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:40.170 { 00:11:40.170 "name": "Malloc1", 00:11:40.170 "aliases": 
[ 00:11:40.170 "074a63ee-0d43-4df8-8eca-5988422fab23" 00:11:40.170 ], 00:11:40.170 "product_name": "Malloc disk", 00:11:40.170 "block_size": 4096, 00:11:40.170 "num_blocks": 256, 00:11:40.170 "uuid": "074a63ee-0d43-4df8-8eca-5988422fab23", 00:11:40.170 "assigned_rate_limits": { 00:11:40.170 "rw_ios_per_sec": 0, 00:11:40.170 "rw_mbytes_per_sec": 0, 00:11:40.170 "r_mbytes_per_sec": 0, 00:11:40.170 "w_mbytes_per_sec": 0 00:11:40.170 }, 00:11:40.170 "claimed": false, 00:11:40.170 "zoned": false, 00:11:40.170 "supported_io_types": { 00:11:40.170 "read": true, 00:11:40.170 "write": true, 00:11:40.170 "unmap": true, 00:11:40.170 "flush": true, 00:11:40.170 "reset": true, 00:11:40.170 "nvme_admin": false, 00:11:40.170 "nvme_io": false, 00:11:40.170 "nvme_io_md": false, 00:11:40.170 "write_zeroes": true, 00:11:40.170 "zcopy": true, 00:11:40.170 "get_zone_info": false, 00:11:40.170 "zone_management": false, 00:11:40.170 "zone_append": false, 00:11:40.170 "compare": false, 00:11:40.170 "compare_and_write": false, 00:11:40.170 "abort": true, 00:11:40.170 "seek_hole": false, 00:11:40.170 "seek_data": false, 00:11:40.170 "copy": true, 00:11:40.170 "nvme_iov_md": false 00:11:40.170 }, 00:11:40.170 "memory_domains": [ 00:11:40.170 { 00:11:40.170 "dma_device_id": "system", 00:11:40.170 "dma_device_type": 1 00:11:40.170 }, 00:11:40.170 { 00:11:40.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.170 "dma_device_type": 2 00:11:40.170 } 00:11:40.170 ], 00:11:40.170 "driver_specific": {} 00:11:40.170 } 00:11:40.170 ]' 00:11:40.170 22:53:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:40.170 22:53:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:40.170 22:53:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.170 22:53:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.170 22:53:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:40.170 22:53:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:40.170 ************************************ 00:11:40.170 END TEST rpc_plugins 00:11:40.170 ************************************ 00:11:40.170 22:53:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:40.170 00:11:40.170 real 0m0.170s 00:11:40.170 user 0m0.090s 00:11:40.170 sys 0m0.034s 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.170 22:53:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:40.429 22:53:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:40.429 22:53:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:40.429 22:53:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.429 22:53:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.429 ************************************ 00:11:40.429 START TEST rpc_trace_cmd_test 00:11:40.429 ************************************ 00:11:40.429 22:53:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:11:40.429 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:40.429 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:40.429 22:53:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.429 22:53:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.430 22:53:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.430 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:40.430 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57998", 00:11:40.430 "tpoint_group_mask": "0x8", 00:11:40.430 "iscsi_conn": { 00:11:40.430 "mask": "0x2", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "scsi": { 00:11:40.430 "mask": "0x4", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "bdev": { 00:11:40.430 "mask": "0x8", 00:11:40.430 "tpoint_mask": "0xffffffffffffffff" 00:11:40.430 }, 00:11:40.430 "nvmf_rdma": { 00:11:40.430 "mask": "0x10", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "nvmf_tcp": { 00:11:40.430 "mask": "0x20", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "ftl": { 00:11:40.430 "mask": "0x40", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "blobfs": { 00:11:40.430 "mask": "0x80", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "dsa": { 00:11:40.430 "mask": "0x200", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "thread": { 00:11:40.430 "mask": "0x400", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "nvme_pcie": { 00:11:40.430 "mask": "0x800", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "iaa": { 00:11:40.430 "mask": "0x1000", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "nvme_tcp": { 00:11:40.430 "mask": "0x2000", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "bdev_nvme": { 00:11:40.430 "mask": "0x4000", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "sock": { 00:11:40.430 "mask": "0x8000", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "blob": { 00:11:40.430 "mask": "0x10000", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "bdev_raid": { 00:11:40.430 "mask": "0x20000", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 }, 00:11:40.430 "scheduler": { 00:11:40.430 "mask": "0x40000", 00:11:40.430 "tpoint_mask": "0x0" 00:11:40.430 } 00:11:40.430 }' 00:11:40.430 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:40.430 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:11:40.430 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:40.430 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:40.430 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:40.430 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:40.430 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:40.430 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:40.430 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:40.689 ************************************ 00:11:40.689 END TEST rpc_trace_cmd_test 00:11:40.689 ************************************ 00:11:40.689 22:53:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:40.689 00:11:40.689 real 0m0.230s 
00:11:40.689 user 0m0.178s 00:11:40.689 sys 0m0.041s 00:11:40.689 22:53:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.689 22:53:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.689 22:53:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:40.689 22:53:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:40.689 22:53:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:40.689 22:53:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:40.689 22:53:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.689 22:53:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.689 ************************************ 00:11:40.689 START TEST rpc_daemon_integrity 00:11:40.689 ************************************ 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:40.689 { 00:11:40.689 "name": "Malloc2", 00:11:40.689 "aliases": [ 00:11:40.689 "6c86cf0f-f03f-4a64-8a04-af4d4adf94e4" 00:11:40.689 ], 00:11:40.689 "product_name": "Malloc disk", 00:11:40.689 "block_size": 512, 00:11:40.689 "num_blocks": 16384, 00:11:40.689 "uuid": "6c86cf0f-f03f-4a64-8a04-af4d4adf94e4", 00:11:40.689 "assigned_rate_limits": { 00:11:40.689 "rw_ios_per_sec": 0, 00:11:40.689 "rw_mbytes_per_sec": 0, 00:11:40.689 "r_mbytes_per_sec": 0, 00:11:40.689 "w_mbytes_per_sec": 0 00:11:40.689 }, 00:11:40.689 "claimed": false, 00:11:40.689 "zoned": false, 00:11:40.689 "supported_io_types": { 00:11:40.689 "read": true, 00:11:40.689 "write": true, 00:11:40.689 "unmap": true, 00:11:40.689 "flush": true, 00:11:40.689 "reset": true, 00:11:40.689 "nvme_admin": false, 00:11:40.689 "nvme_io": false, 00:11:40.689 "nvme_io_md": false, 00:11:40.689 "write_zeroes": true, 00:11:40.689 "zcopy": true, 00:11:40.689 "get_zone_info": false, 00:11:40.689 "zone_management": false, 00:11:40.689 "zone_append": false, 00:11:40.689 "compare": false, 00:11:40.689 
"compare_and_write": false, 00:11:40.689 "abort": true, 00:11:40.689 "seek_hole": false, 00:11:40.689 "seek_data": false, 00:11:40.689 "copy": true, 00:11:40.689 "nvme_iov_md": false 00:11:40.689 }, 00:11:40.689 "memory_domains": [ 00:11:40.689 { 00:11:40.689 "dma_device_id": "system", 00:11:40.689 "dma_device_type": 1 00:11:40.689 }, 00:11:40.689 { 00:11:40.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.689 "dma_device_type": 2 00:11:40.689 } 00:11:40.689 ], 00:11:40.689 "driver_specific": {} 00:11:40.689 } 00:11:40.689 ]' 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.689 22:53:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.689 [2024-12-09 22:53:07.998017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:40.689 [2024-12-09 22:53:07.998206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.689 [2024-12-09 22:53:07.998238] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:40.689 [2024-12-09 22:53:07.998253] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.689 [2024-12-09 22:53:08.000805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.690 [2024-12-09 22:53:08.000849] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:40.690 Passthru0 00:11:40.690 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.690 22:53:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:40.690 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.690 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:40.948 { 00:11:40.948 "name": "Malloc2", 00:11:40.948 "aliases": [ 00:11:40.948 "6c86cf0f-f03f-4a64-8a04-af4d4adf94e4" 00:11:40.948 ], 00:11:40.948 "product_name": "Malloc disk", 00:11:40.948 "block_size": 512, 00:11:40.948 "num_blocks": 16384, 00:11:40.948 "uuid": "6c86cf0f-f03f-4a64-8a04-af4d4adf94e4", 00:11:40.948 "assigned_rate_limits": { 00:11:40.948 "rw_ios_per_sec": 0, 00:11:40.948 "rw_mbytes_per_sec": 0, 00:11:40.948 "r_mbytes_per_sec": 0, 00:11:40.948 "w_mbytes_per_sec": 0 00:11:40.948 }, 00:11:40.948 "claimed": true, 00:11:40.948 "claim_type": "exclusive_write", 00:11:40.948 "zoned": false, 00:11:40.948 "supported_io_types": { 00:11:40.948 "read": true, 00:11:40.948 "write": true, 00:11:40.948 "unmap": true, 00:11:40.948 "flush": true, 00:11:40.948 "reset": true, 00:11:40.948 "nvme_admin": false, 00:11:40.948 "nvme_io": false, 00:11:40.948 "nvme_io_md": false, 00:11:40.948 "write_zeroes": true, 00:11:40.948 "zcopy": true, 00:11:40.948 "get_zone_info": false, 00:11:40.948 "zone_management": false, 00:11:40.948 "zone_append": false, 00:11:40.948 "compare": false, 00:11:40.948 "compare_and_write": false, 00:11:40.948 "abort": true, 00:11:40.948 "seek_hole": false, 00:11:40.948 "seek_data": false, 
00:11:40.948 "copy": true, 00:11:40.948 "nvme_iov_md": false 00:11:40.948 }, 00:11:40.948 "memory_domains": [ 00:11:40.948 { 00:11:40.948 "dma_device_id": "system", 00:11:40.948 "dma_device_type": 1 00:11:40.948 }, 00:11:40.948 { 00:11:40.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.948 "dma_device_type": 2 00:11:40.948 } 00:11:40.948 ], 00:11:40.948 "driver_specific": {} 00:11:40.948 }, 00:11:40.948 { 00:11:40.948 "name": "Passthru0", 00:11:40.948 "aliases": [ 00:11:40.948 "9adebe8a-24d5-5559-a0cc-ce1facc44de2" 00:11:40.948 ], 00:11:40.948 "product_name": "passthru", 00:11:40.948 "block_size": 512, 00:11:40.948 "num_blocks": 16384, 00:11:40.948 "uuid": "9adebe8a-24d5-5559-a0cc-ce1facc44de2", 00:11:40.948 "assigned_rate_limits": { 00:11:40.948 "rw_ios_per_sec": 0, 00:11:40.948 "rw_mbytes_per_sec": 0, 00:11:40.948 "r_mbytes_per_sec": 0, 00:11:40.948 "w_mbytes_per_sec": 0 00:11:40.948 }, 00:11:40.948 "claimed": false, 00:11:40.948 "zoned": false, 00:11:40.948 "supported_io_types": { 00:11:40.948 "read": true, 00:11:40.948 "write": true, 00:11:40.948 "unmap": true, 00:11:40.948 "flush": true, 00:11:40.948 "reset": true, 00:11:40.948 "nvme_admin": false, 00:11:40.948 "nvme_io": false, 00:11:40.948 "nvme_io_md": false, 00:11:40.948 "write_zeroes": true, 00:11:40.948 "zcopy": true, 00:11:40.948 "get_zone_info": false, 00:11:40.948 "zone_management": false, 00:11:40.948 "zone_append": false, 00:11:40.948 "compare": false, 00:11:40.948 "compare_and_write": false, 00:11:40.948 "abort": true, 00:11:40.948 "seek_hole": false, 00:11:40.948 "seek_data": false, 00:11:40.948 "copy": true, 00:11:40.948 "nvme_iov_md": false 00:11:40.948 }, 00:11:40.948 "memory_domains": [ 00:11:40.948 { 00:11:40.948 "dma_device_id": "system", 00:11:40.948 "dma_device_type": 1 00:11:40.948 }, 00:11:40.948 { 00:11:40.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.948 "dma_device_type": 2 00:11:40.948 } 00:11:40.948 ], 00:11:40.948 "driver_specific": { 00:11:40.948 "passthru": { 00:11:40.948 "name": "Passthru0", 00:11:40.948 "base_bdev_name": "Malloc2" 00:11:40.948 } 00:11:40.948 } 00:11:40.948 } 00:11:40.948 ]' 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:40.948 ************************************ 00:11:40.948 END TEST rpc_daemon_integrity 00:11:40.948 ************************************ 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:40.948 00:11:40.948 real 0m0.341s 00:11:40.948 user 0m0.175s 00:11:40.948 sys 0m0.066s 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.948 22:53:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.948 22:53:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:40.948 22:53:08 rpc -- rpc/rpc.sh@84 -- # killprocess 57998 00:11:40.948 22:53:08 rpc -- common/autotest_common.sh@954 -- # '[' -z 57998 ']' 00:11:40.948 22:53:08 rpc -- common/autotest_common.sh@958 -- # kill -0 57998 00:11:40.948 22:53:08 rpc -- common/autotest_common.sh@959 -- # uname 00:11:40.948 22:53:08 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:40.948 22:53:08 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57998 00:11:41.206 killing process with pid 57998 00:11:41.206 22:53:08 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.206 22:53:08 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.206 22:53:08 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57998' 00:11:41.206 22:53:08 rpc -- common/autotest_common.sh@973 -- # kill 57998 00:11:41.206 22:53:08 rpc -- common/autotest_common.sh@978 -- # wait 57998 00:11:43.739 00:11:43.739 real 0m5.768s 00:11:43.739 user 0m6.078s 00:11:43.739 sys 0m1.165s 00:11:43.739 22:53:10 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.739 22:53:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.739 ************************************ 00:11:43.739 END TEST rpc 00:11:43.739 ************************************ 00:11:43.739 22:53:10 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:43.739 22:53:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:43.739 22:53:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.739 22:53:10 -- common/autotest_common.sh@10 -- # set +x 00:11:43.739 ************************************ 00:11:43.739 START TEST skip_rpc 00:11:43.739 ************************************ 00:11:43.739 22:53:10 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:43.998 * Looking for test storage... 
00:11:43.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:43.998 22:53:11 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:43.998 22:53:11 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:43.998 22:53:11 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:43.998 22:53:11 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:43.998 22:53:11 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@345 -- # : 1 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.999 22:53:11 skip_rpc -- scripts/common.sh@368 -- # return 0 00:11:43.999 22:53:11 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.999 22:53:11 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:43.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.999 --rc genhtml_branch_coverage=1 00:11:43.999 --rc genhtml_function_coverage=1 00:11:43.999 --rc genhtml_legend=1 00:11:43.999 --rc geninfo_all_blocks=1 00:11:43.999 --rc geninfo_unexecuted_blocks=1 00:11:43.999 00:11:43.999 ' 00:11:43.999 22:53:11 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:43.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.999 --rc genhtml_branch_coverage=1 00:11:43.999 --rc genhtml_function_coverage=1 00:11:43.999 --rc genhtml_legend=1 00:11:43.999 --rc geninfo_all_blocks=1 00:11:43.999 --rc geninfo_unexecuted_blocks=1 00:11:43.999 00:11:43.999 ' 00:11:43.999 22:53:11 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:43.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.999 --rc genhtml_branch_coverage=1 00:11:43.999 --rc genhtml_function_coverage=1 00:11:43.999 --rc genhtml_legend=1 00:11:43.999 --rc geninfo_all_blocks=1 00:11:43.999 --rc geninfo_unexecuted_blocks=1 00:11:43.999 00:11:43.999 ' 00:11:43.999 22:53:11 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:43.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.999 --rc genhtml_branch_coverage=1 00:11:43.999 --rc genhtml_function_coverage=1 00:11:43.999 --rc genhtml_legend=1 00:11:43.999 --rc geninfo_all_blocks=1 00:11:43.999 --rc geninfo_unexecuted_blocks=1 00:11:43.999 00:11:43.999 ' 00:11:43.999 22:53:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:43.999 22:53:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:43.999 22:53:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:43.999 22:53:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:43.999 22:53:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.999 22:53:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.999 ************************************ 00:11:43.999 START TEST skip_rpc 00:11:43.999 ************************************ 00:11:43.999 22:53:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:11:43.999 22:53:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58238 00:11:43.999 22:53:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:43.999 22:53:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:43.999 22:53:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:44.258 [2024-12-09 22:53:11.339761] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
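test_skip_rpc, whose target is launched above as pid 58238, passes --no-rpc-server and then asserts that the RPC client cannot reach it; the NOT wrapper traced below expects rpc_cmd to fail. A condensed sketch of that assertion, with the harness helpers (rpc_cmd, killprocess) replaced by their plain-shell equivalents:

# Condensed test_skip_rpc: with --no-rpc-server, every RPC must fail.
TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$TGT --no-rpc-server -m 0x1 & spdk_pid=$!
sleep 5                     # no socket to poll, so the test just waits

if scripts/rpc.py spdk_get_version &>/dev/null; then
    echo "FAIL: RPC answered although the server is disabled" >&2
    exit 1
fi

kill "$spdk_pid"; wait "$spdk_pid"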
00:11:44.258 [2024-12-09 22:53:11.339908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58238 ] 00:11:44.258 [2024-12-09 22:53:11.527799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.517 [2024-12-09 22:53:11.673936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58238 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58238 ']' 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58238 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58238 00:11:49.787 killing process with pid 58238 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58238' 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58238 00:11:49.787 22:53:16 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58238 00:11:51.738 00:11:51.738 real 0m7.668s 00:11:51.738 user 0m7.008s 00:11:51.738 sys 0m0.567s 00:11:51.738 ************************************ 00:11:51.738 END TEST skip_rpc 00:11:51.738 ************************************ 00:11:51.738 22:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.738 22:53:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:11:51.738 22:53:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:51.738 22:53:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:51.738 22:53:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.738 22:53:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.738 ************************************ 00:11:51.738 START TEST skip_rpc_with_json 00:11:51.738 ************************************ 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58342 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58342 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58342 ']' 00:11:51.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.738 22:53:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:51.997 [2024-12-09 22:53:19.073710] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
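skip_rpc_with_json, launching above as pid 58342, is the inverse check: it will create state over RPC (a TCP transport), snapshot the running configuration with save_config, then boot a second target with no RPC server at all and prove the state came back purely from the JSON. In outline, a sketch hedged to the paths and helpers seen in this log:

# Outline of test_skip_rpc_with_json, assuming scripts/rpc.py.
TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
CFG=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
LOG=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt

$TGT -m 0x1 & pid=$!
# ... waitforlisten on /var/tmp/spdk.sock ...
scripts/rpc.py nvmf_create_transport -t tcp    # state worth snapshotting
scripts/rpc.py save_config > "$CFG"
kill "$pid"; wait "$pid"

# Second boot: no RPC server, configuration comes from the snapshot
$TGT --no-rpc-server -m 0x1 --json "$CFG" &> "$LOG" & pid=$!
sleep 5
grep -q 'TCP Transport Init' "$LOG"   # rpc/skip_rpc.sh@51 does exactly this
kill "$pid"; wait "$pid"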
00:11:51.997 [2024-12-09 22:53:19.074068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58342 ] 00:11:51.997 [2024-12-09 22:53:19.259718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.256 [2024-12-09 22:53:19.403211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:53.193 [2024-12-09 22:53:20.431417] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:53.193 request: 00:11:53.193 { 00:11:53.193 "trtype": "tcp", 00:11:53.193 "method": "nvmf_get_transports", 00:11:53.193 "req_id": 1 00:11:53.193 } 00:11:53.193 Got JSON-RPC error response 00:11:53.193 response: 00:11:53.193 { 00:11:53.193 "code": -19, 00:11:53.193 "message": "No such device" 00:11:53.193 } 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:53.193 [2024-12-09 22:53:20.447542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.193 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:53.452 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.452 22:53:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:53.452 { 00:11:53.452 "subsystems": [ 00:11:53.452 { 00:11:53.452 "subsystem": "fsdev", 00:11:53.452 "config": [ 00:11:53.452 { 00:11:53.452 "method": "fsdev_set_opts", 00:11:53.452 "params": { 00:11:53.452 "fsdev_io_pool_size": 65535, 00:11:53.452 "fsdev_io_cache_size": 256 00:11:53.452 } 00:11:53.452 } 00:11:53.452 ] 00:11:53.452 }, 00:11:53.452 { 00:11:53.452 "subsystem": "keyring", 00:11:53.452 "config": [] 00:11:53.452 }, 00:11:53.452 { 00:11:53.452 "subsystem": "iobuf", 00:11:53.452 "config": [ 00:11:53.452 { 00:11:53.452 "method": "iobuf_set_options", 00:11:53.452 "params": { 00:11:53.452 "small_pool_count": 8192, 00:11:53.452 "large_pool_count": 1024, 00:11:53.452 "small_bufsize": 8192, 00:11:53.452 "large_bufsize": 135168, 00:11:53.452 "enable_numa": false 00:11:53.452 } 00:11:53.452 } 00:11:53.452 ] 00:11:53.452 }, 00:11:53.452 { 00:11:53.452 "subsystem": "sock", 00:11:53.452 "config": [ 00:11:53.452 { 
00:11:53.452 "method": "sock_set_default_impl", 00:11:53.452 "params": { 00:11:53.452 "impl_name": "posix" 00:11:53.452 } 00:11:53.452 }, 00:11:53.452 { 00:11:53.452 "method": "sock_impl_set_options", 00:11:53.452 "params": { 00:11:53.452 "impl_name": "ssl", 00:11:53.452 "recv_buf_size": 4096, 00:11:53.452 "send_buf_size": 4096, 00:11:53.452 "enable_recv_pipe": true, 00:11:53.452 "enable_quickack": false, 00:11:53.452 "enable_placement_id": 0, 00:11:53.452 "enable_zerocopy_send_server": true, 00:11:53.452 "enable_zerocopy_send_client": false, 00:11:53.452 "zerocopy_threshold": 0, 00:11:53.452 "tls_version": 0, 00:11:53.452 "enable_ktls": false 00:11:53.452 } 00:11:53.452 }, 00:11:53.452 { 00:11:53.452 "method": "sock_impl_set_options", 00:11:53.452 "params": { 00:11:53.452 "impl_name": "posix", 00:11:53.452 "recv_buf_size": 2097152, 00:11:53.452 "send_buf_size": 2097152, 00:11:53.452 "enable_recv_pipe": true, 00:11:53.452 "enable_quickack": false, 00:11:53.452 "enable_placement_id": 0, 00:11:53.452 "enable_zerocopy_send_server": true, 00:11:53.452 "enable_zerocopy_send_client": false, 00:11:53.452 "zerocopy_threshold": 0, 00:11:53.452 "tls_version": 0, 00:11:53.452 "enable_ktls": false 00:11:53.452 } 00:11:53.453 } 00:11:53.453 ] 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "subsystem": "vmd", 00:11:53.453 "config": [] 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "subsystem": "accel", 00:11:53.453 "config": [ 00:11:53.453 { 00:11:53.453 "method": "accel_set_options", 00:11:53.453 "params": { 00:11:53.453 "small_cache_size": 128, 00:11:53.453 "large_cache_size": 16, 00:11:53.453 "task_count": 2048, 00:11:53.453 "sequence_count": 2048, 00:11:53.453 "buf_count": 2048 00:11:53.453 } 00:11:53.453 } 00:11:53.453 ] 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "subsystem": "bdev", 00:11:53.453 "config": [ 00:11:53.453 { 00:11:53.453 "method": "bdev_set_options", 00:11:53.453 "params": { 00:11:53.453 "bdev_io_pool_size": 65535, 00:11:53.453 "bdev_io_cache_size": 256, 00:11:53.453 "bdev_auto_examine": true, 00:11:53.453 "iobuf_small_cache_size": 128, 00:11:53.453 "iobuf_large_cache_size": 16 00:11:53.453 } 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "method": "bdev_raid_set_options", 00:11:53.453 "params": { 00:11:53.453 "process_window_size_kb": 1024, 00:11:53.453 "process_max_bandwidth_mb_sec": 0 00:11:53.453 } 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "method": "bdev_iscsi_set_options", 00:11:53.453 "params": { 00:11:53.453 "timeout_sec": 30 00:11:53.453 } 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "method": "bdev_nvme_set_options", 00:11:53.453 "params": { 00:11:53.453 "action_on_timeout": "none", 00:11:53.453 "timeout_us": 0, 00:11:53.453 "timeout_admin_us": 0, 00:11:53.453 "keep_alive_timeout_ms": 10000, 00:11:53.453 "arbitration_burst": 0, 00:11:53.453 "low_priority_weight": 0, 00:11:53.453 "medium_priority_weight": 0, 00:11:53.453 "high_priority_weight": 0, 00:11:53.453 "nvme_adminq_poll_period_us": 10000, 00:11:53.453 "nvme_ioq_poll_period_us": 0, 00:11:53.453 "io_queue_requests": 0, 00:11:53.453 "delay_cmd_submit": true, 00:11:53.453 "transport_retry_count": 4, 00:11:53.453 "bdev_retry_count": 3, 00:11:53.453 "transport_ack_timeout": 0, 00:11:53.453 "ctrlr_loss_timeout_sec": 0, 00:11:53.453 "reconnect_delay_sec": 0, 00:11:53.453 "fast_io_fail_timeout_sec": 0, 00:11:53.453 "disable_auto_failback": false, 00:11:53.453 "generate_uuids": false, 00:11:53.453 "transport_tos": 0, 00:11:53.453 "nvme_error_stat": false, 00:11:53.453 "rdma_srq_size": 0, 00:11:53.453 "io_path_stat": false, 
00:11:53.453 "allow_accel_sequence": false, 00:11:53.453 "rdma_max_cq_size": 0, 00:11:53.453 "rdma_cm_event_timeout_ms": 0, 00:11:53.453 "dhchap_digests": [ 00:11:53.453 "sha256", 00:11:53.453 "sha384", 00:11:53.453 "sha512" 00:11:53.453 ], 00:11:53.453 "dhchap_dhgroups": [ 00:11:53.453 "null", 00:11:53.453 "ffdhe2048", 00:11:53.453 "ffdhe3072", 00:11:53.453 "ffdhe4096", 00:11:53.453 "ffdhe6144", 00:11:53.453 "ffdhe8192" 00:11:53.453 ] 00:11:53.453 } 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "method": "bdev_nvme_set_hotplug", 00:11:53.453 "params": { 00:11:53.453 "period_us": 100000, 00:11:53.453 "enable": false 00:11:53.453 } 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "method": "bdev_wait_for_examine" 00:11:53.453 } 00:11:53.453 ] 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "subsystem": "scsi", 00:11:53.453 "config": null 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "subsystem": "scheduler", 00:11:53.453 "config": [ 00:11:53.453 { 00:11:53.453 "method": "framework_set_scheduler", 00:11:53.453 "params": { 00:11:53.453 "name": "static" 00:11:53.453 } 00:11:53.453 } 00:11:53.453 ] 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "subsystem": "vhost_scsi", 00:11:53.453 "config": [] 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "subsystem": "vhost_blk", 00:11:53.453 "config": [] 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "subsystem": "ublk", 00:11:53.453 "config": [] 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "subsystem": "nbd", 00:11:53.453 "config": [] 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "subsystem": "nvmf", 00:11:53.453 "config": [ 00:11:53.453 { 00:11:53.453 "method": "nvmf_set_config", 00:11:53.453 "params": { 00:11:53.453 "discovery_filter": "match_any", 00:11:53.453 "admin_cmd_passthru": { 00:11:53.453 "identify_ctrlr": false 00:11:53.453 }, 00:11:53.453 "dhchap_digests": [ 00:11:53.453 "sha256", 00:11:53.453 "sha384", 00:11:53.453 "sha512" 00:11:53.453 ], 00:11:53.453 "dhchap_dhgroups": [ 00:11:53.453 "null", 00:11:53.453 "ffdhe2048", 00:11:53.453 "ffdhe3072", 00:11:53.453 "ffdhe4096", 00:11:53.453 "ffdhe6144", 00:11:53.453 "ffdhe8192" 00:11:53.453 ] 00:11:53.453 } 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "method": "nvmf_set_max_subsystems", 00:11:53.453 "params": { 00:11:53.453 "max_subsystems": 1024 00:11:53.453 } 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "method": "nvmf_set_crdt", 00:11:53.453 "params": { 00:11:53.453 "crdt1": 0, 00:11:53.453 "crdt2": 0, 00:11:53.453 "crdt3": 0 00:11:53.453 } 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "method": "nvmf_create_transport", 00:11:53.453 "params": { 00:11:53.453 "trtype": "TCP", 00:11:53.453 "max_queue_depth": 128, 00:11:53.453 "max_io_qpairs_per_ctrlr": 127, 00:11:53.453 "in_capsule_data_size": 4096, 00:11:53.453 "max_io_size": 131072, 00:11:53.453 "io_unit_size": 131072, 00:11:53.453 "max_aq_depth": 128, 00:11:53.453 "num_shared_buffers": 511, 00:11:53.453 "buf_cache_size": 4294967295, 00:11:53.453 "dif_insert_or_strip": false, 00:11:53.453 "zcopy": false, 00:11:53.453 "c2h_success": true, 00:11:53.453 "sock_priority": 0, 00:11:53.453 "abort_timeout_sec": 1, 00:11:53.453 "ack_timeout": 0, 00:11:53.453 "data_wr_pool_size": 0 00:11:53.453 } 00:11:53.453 } 00:11:53.453 ] 00:11:53.453 }, 00:11:53.453 { 00:11:53.453 "subsystem": "iscsi", 00:11:53.453 "config": [ 00:11:53.453 { 00:11:53.453 "method": "iscsi_set_options", 00:11:53.453 "params": { 00:11:53.453 "node_base": "iqn.2016-06.io.spdk", 00:11:53.453 "max_sessions": 128, 00:11:53.453 "max_connections_per_session": 2, 00:11:53.453 "max_queue_depth": 64, 00:11:53.453 
"default_time2wait": 2, 00:11:53.453 "default_time2retain": 20, 00:11:53.453 "first_burst_length": 8192, 00:11:53.453 "immediate_data": true, 00:11:53.453 "allow_duplicated_isid": false, 00:11:53.453 "error_recovery_level": 0, 00:11:53.453 "nop_timeout": 60, 00:11:53.453 "nop_in_interval": 30, 00:11:53.453 "disable_chap": false, 00:11:53.453 "require_chap": false, 00:11:53.453 "mutual_chap": false, 00:11:53.453 "chap_group": 0, 00:11:53.453 "max_large_datain_per_connection": 64, 00:11:53.453 "max_r2t_per_connection": 4, 00:11:53.453 "pdu_pool_size": 36864, 00:11:53.453 "immediate_data_pool_size": 16384, 00:11:53.453 "data_out_pool_size": 2048 00:11:53.453 } 00:11:53.453 } 00:11:53.453 ] 00:11:53.453 } 00:11:53.453 ] 00:11:53.453 } 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58342 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58342 ']' 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58342 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58342 00:11:53.453 killing process with pid 58342 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58342' 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58342 00:11:53.453 22:53:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58342 00:11:55.986 22:53:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58398 00:11:55.986 22:53:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:55.986 22:53:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:12:01.307 22:53:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58398 00:12:01.307 22:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58398 ']' 00:12:01.307 22:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58398 00:12:01.307 22:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:12:01.307 22:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:01.307 22:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58398 00:12:01.307 killing process with pid 58398 00:12:01.307 22:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:01.307 22:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:01.307 22:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58398' 00:12:01.307 22:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58398 00:12:01.307 22:53:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58398 00:12:03.836 22:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:12:03.836 22:53:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:12:03.836 00:12:03.836 real 0m11.987s 00:12:03.836 user 0m11.162s 00:12:03.836 sys 0m1.144s 00:12:03.836 22:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.836 22:53:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:03.836 ************************************ 00:12:03.836 END TEST skip_rpc_with_json 00:12:03.836 ************************************ 00:12:03.836 22:53:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:12:03.836 22:53:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:03.836 22:53:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.836 22:53:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.836 ************************************ 00:12:03.836 START TEST skip_rpc_with_delay 00:12:03.836 ************************************ 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:12:03.836 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:03.836 [2024-12-09 22:53:31.156827] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
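skip_rpc_with_delay provokes that error on purpose: --wait-for-rpc tells the app to pause before subsystem init until an RPC arrives, which is impossible when --no-rpc-server disables the server, so spdk_app_start rejects the combination and exits non-zero. For contrast, a sketch of the normal --wait-for-rpc flow (assuming scripts/rpc.py):

# Normal --wait-for-rpc usage, for contrast with the rejected
# --no-rpc-server combination traced above.
TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$TGT -m 0x1 --wait-for-rpc & pid=$!
# ... poll the RPC socket until it answers, then:
scripts/rpc.py framework_start_init   # releases the app to finish subsystem init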
00:12:04.094 ************************************ 00:12:04.094 END TEST skip_rpc_with_delay 00:12:04.094 ************************************ 00:12:04.094 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:12:04.094 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:04.094 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:04.094 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:04.094 00:12:04.094 real 0m0.200s 00:12:04.094 user 0m0.091s 00:12:04.094 sys 0m0.108s 00:12:04.094 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.094 22:53:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:12:04.094 22:53:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:12:04.094 22:53:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:12:04.095 22:53:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:12:04.095 22:53:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.095 22:53:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.095 22:53:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.095 ************************************ 00:12:04.095 START TEST exit_on_failed_rpc_init 00:12:04.095 ************************************ 00:12:04.095 22:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:12:04.095 22:53:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58537 00:12:04.095 22:53:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:04.095 22:53:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58537 00:12:04.095 22:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58537 ']' 00:12:04.095 22:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.095 22:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.095 22:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.095 22:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.095 22:53:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:04.095 [2024-12-09 22:53:31.425517] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
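exit_on_failed_rpc_init, starting above as pid 58537, checks failure-path behavior: a second spdk_tgt pointed at an already-occupied RPC socket must fail initialization and exit non-zero rather than hang. The shape of the check, as a sketch hedged to the trace that follows:

# Sketch of test_exit_on_failed_rpc_init: two targets, one RPC socket.
TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

$TGT -m 0x1 & pid=$!
# ... waitforlisten on /var/tmp/spdk.sock ...

# The second instance reuses the default socket; rpc.c reports
# "RPC Unix domain socket path /var/tmp/spdk.sock in use" and the
# app stops, so this command must return non-zero.
if $TGT -m 0x2; then
    echo "FAIL: second target should not have started" >&2
    exit 1
fi

kill "$pid"; wait "$pid"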
00:12:04.095 [2024-12-09 22:53:31.425655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58537 ] 00:12:04.353 [2024-12-09 22:53:31.606320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.611 [2024-12-09 22:53:31.752102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:12:05.544 22:53:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:05.544 [2024-12-09 22:53:32.876109] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:12:05.544 [2024-12-09 22:53:32.876257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58555 ] 00:12:05.803 [2024-12-09 22:53:33.063898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.074 [2024-12-09 22:53:33.205443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.074 [2024-12-09 22:53:33.205757] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:12:06.074 [2024-12-09 22:53:33.205782] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:12:06.074 [2024-12-09 22:53:33.205804] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58537 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58537 ']' 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58537 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58537 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58537' 00:12:06.331 killing process with pid 58537 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58537 00:12:06.331 22:53:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58537 00:12:08.859 00:12:08.859 real 0m4.850s 00:12:08.859 user 0m5.054s 00:12:08.859 sys 0m0.775s 00:12:08.859 ************************************ 00:12:08.859 END TEST exit_on_failed_rpc_init 00:12:08.859 ************************************ 00:12:08.859 22:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.859 22:53:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:09.117 22:53:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:12:09.117 00:12:09.117 real 0m25.254s 00:12:09.117 user 0m23.545s 00:12:09.117 sys 0m2.894s 00:12:09.117 22:53:36 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.117 ************************************ 00:12:09.117 22:53:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.117 END TEST skip_rpc 00:12:09.117 ************************************ 00:12:09.117 22:53:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:12:09.117 22:53:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.117 22:53:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.117 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:12:09.117 
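The killprocess helper traced repeatedly above (a kill -0 liveness probe, a ps comm= lookup used to special-case a process named sudo, then kill and wait) condenses to roughly this sketch; the real helper in autotest_common.sh carries extra argument checks beyond what is shown here:

# Rough sketch of the killprocess pattern from this log.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                       # still alive?
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
    if [ "$process_name" != sudo ]; then             # guard seen in the trace
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid" || true                              # reap the child
}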
************************************ 00:12:09.117 START TEST rpc_client 00:12:09.117 ************************************ 00:12:09.117 22:53:36 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:12:09.117 * Looking for test storage... 00:12:09.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:12:09.117 22:53:36 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:09.117 22:53:36 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:12:09.117 22:53:36 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:09.375 22:53:36 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@345 -- # : 1 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.375 22:53:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:12:09.375 22:53:36 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.375 22:53:36 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:09.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.375 --rc genhtml_branch_coverage=1 00:12:09.375 --rc genhtml_function_coverage=1 00:12:09.375 --rc genhtml_legend=1 00:12:09.375 --rc geninfo_all_blocks=1 00:12:09.375 --rc geninfo_unexecuted_blocks=1 00:12:09.375 00:12:09.375 ' 00:12:09.375 22:53:36 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:09.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.375 --rc genhtml_branch_coverage=1 00:12:09.375 --rc genhtml_function_coverage=1 00:12:09.375 --rc genhtml_legend=1 00:12:09.375 --rc geninfo_all_blocks=1 00:12:09.375 --rc geninfo_unexecuted_blocks=1 00:12:09.375 00:12:09.375 ' 00:12:09.375 22:53:36 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:09.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.375 --rc genhtml_branch_coverage=1 00:12:09.375 --rc genhtml_function_coverage=1 00:12:09.375 --rc genhtml_legend=1 00:12:09.375 --rc geninfo_all_blocks=1 00:12:09.375 --rc geninfo_unexecuted_blocks=1 00:12:09.375 00:12:09.375 ' 00:12:09.375 22:53:36 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:09.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.375 --rc genhtml_branch_coverage=1 00:12:09.375 --rc genhtml_function_coverage=1 00:12:09.375 --rc genhtml_legend=1 00:12:09.375 --rc geninfo_all_blocks=1 00:12:09.375 --rc geninfo_unexecuted_blocks=1 00:12:09.375 00:12:09.375 ' 00:12:09.375 22:53:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:12:09.375 OK 00:12:09.375 22:53:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:12:09.375 00:12:09.375 real 0m0.313s 00:12:09.375 user 0m0.155s 00:12:09.375 sys 0m0.176s 00:12:09.375 22:53:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.375 22:53:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:12:09.375 ************************************ 00:12:09.375 END TEST rpc_client 00:12:09.375 ************************************ 00:12:09.375 22:53:36 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:12:09.375 22:53:36 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.375 22:53:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.375 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:12:09.375 ************************************ 00:12:09.375 START TEST json_config 00:12:09.375 ************************************ 00:12:09.375 22:53:36 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:12:09.633 22:53:36 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:09.633 22:53:36 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:12:09.633 22:53:36 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:09.633 22:53:36 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:09.633 22:53:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.633 22:53:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.633 22:53:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.633 22:53:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.633 22:53:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.633 22:53:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.633 22:53:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.633 22:53:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.633 22:53:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.633 22:53:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.633 22:53:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.633 22:53:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:12:09.633 22:53:36 json_config -- scripts/common.sh@345 -- # : 1 00:12:09.633 22:53:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.633 22:53:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:09.633 22:53:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:12:09.633 22:53:36 json_config -- scripts/common.sh@353 -- # local d=1 00:12:09.633 22:53:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.633 22:53:36 json_config -- scripts/common.sh@355 -- # echo 1 00:12:09.633 22:53:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.633 22:53:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:12:09.633 22:53:36 json_config -- scripts/common.sh@353 -- # local d=2 00:12:09.633 22:53:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.633 22:53:36 json_config -- scripts/common.sh@355 -- # echo 2 00:12:09.633 22:53:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.633 22:53:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.633 22:53:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.633 22:53:36 json_config -- scripts/common.sh@368 -- # return 0 00:12:09.633 22:53:36 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.633 22:53:36 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:09.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.633 --rc genhtml_branch_coverage=1 00:12:09.633 --rc genhtml_function_coverage=1 00:12:09.634 --rc genhtml_legend=1 00:12:09.634 --rc geninfo_all_blocks=1 00:12:09.634 --rc geninfo_unexecuted_blocks=1 00:12:09.634 00:12:09.634 ' 00:12:09.634 22:53:36 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:09.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.634 --rc genhtml_branch_coverage=1 00:12:09.634 --rc genhtml_function_coverage=1 00:12:09.634 --rc genhtml_legend=1 00:12:09.634 --rc geninfo_all_blocks=1 00:12:09.634 --rc geninfo_unexecuted_blocks=1 00:12:09.634 00:12:09.634 ' 00:12:09.634 22:53:36 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:09.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.634 --rc genhtml_branch_coverage=1 00:12:09.634 --rc genhtml_function_coverage=1 00:12:09.634 --rc genhtml_legend=1 00:12:09.634 --rc geninfo_all_blocks=1 00:12:09.634 --rc geninfo_unexecuted_blocks=1 00:12:09.634 00:12:09.634 ' 00:12:09.634 22:53:36 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:09.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.634 --rc genhtml_branch_coverage=1 00:12:09.634 --rc genhtml_function_coverage=1 00:12:09.634 --rc genhtml_legend=1 00:12:09.634 --rc geninfo_all_blocks=1 00:12:09.634 --rc geninfo_unexecuted_blocks=1 00:12:09.634 00:12:09.634 ' 00:12:09.634 22:53:36 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.634 22:53:36 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f63ccfb0-8e1a-4e3a-81ed-f5c6f2fe319a 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f63ccfb0-8e1a-4e3a-81ed-f5c6f2fe319a 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:09.634 22:53:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.634 22:53:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.634 22:53:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.634 22:53:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.634 22:53:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.634 22:53:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.634 22:53:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.634 22:53:36 json_config -- paths/export.sh@5 -- # export PATH 00:12:09.634 22:53:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@51 -- # : 0 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.634 22:53:36 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.634 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.634 22:53:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.634 22:53:36 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:12:09.634 22:53:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:12:09.634 22:53:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:12:09.634 22:53:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:12:09.634 22:53:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:12:09.634 22:53:36 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:12:09.634 WARNING: No tests are enabled so not running JSON configuration tests 00:12:09.634 22:53:36 json_config -- json_config/json_config.sh@28 -- # exit 0 00:12:09.634 00:12:09.634 real 0m0.228s 00:12:09.634 user 0m0.137s 00:12:09.634 sys 0m0.091s 00:12:09.634 22:53:36 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.634 22:53:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:09.634 ************************************ 00:12:09.634 END TEST json_config 00:12:09.634 ************************************ 00:12:09.893 22:53:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:09.893 22:53:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:09.893 22:53:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.893 22:53:36 -- common/autotest_common.sh@10 -- # set +x 00:12:09.893 ************************************ 00:12:09.893 START TEST json_config_extra_key 00:12:09.893 ************************************ 00:12:09.893 22:53:36 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:09.893 22:53:37 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:09.893 22:53:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:12:09.893 22:53:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:09.893 22:53:37 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.893 22:53:37 
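One genuine shell quirk is recorded in the json_config run above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because an unset flag expands to the empty string, and test(1) prints "[: : integer expression expected" (exit status 2, which an if-condition simply treats as false, so the run continues). A hedged illustration of the failure mode and the usual guard; the variable name here is hypothetical, any unset variable reproduces it:

# Reproduce the "[: : integer expression expected" message seen above.
# SOME_TEST_FLAG is a hypothetical stand-in for the unset flag.
unset SOME_TEST_FLAG

if [ "$SOME_TEST_FLAG" -eq 1 ]; then   # [: : integer expression expected
    echo "never reached"
fi

# Defensive form: default the empty value to 0 before the numeric test
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "still never reached, but no error printed"
fi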
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:12:09.893 22:53:37 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.893 22:53:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:09.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.893 --rc genhtml_branch_coverage=1 00:12:09.893 --rc genhtml_function_coverage=1 00:12:09.893 --rc genhtml_legend=1 00:12:09.893 --rc geninfo_all_blocks=1 00:12:09.893 --rc geninfo_unexecuted_blocks=1 00:12:09.893 00:12:09.893 ' 00:12:09.893 22:53:37 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:09.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.893 --rc genhtml_branch_coverage=1 00:12:09.893 --rc genhtml_function_coverage=1 00:12:09.893 --rc genhtml_legend=1 00:12:09.893 --rc geninfo_all_blocks=1 00:12:09.893 --rc geninfo_unexecuted_blocks=1 00:12:09.893 00:12:09.893 ' 00:12:09.893 22:53:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:09.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.893 --rc genhtml_branch_coverage=1 00:12:09.893 --rc genhtml_function_coverage=1 00:12:09.893 --rc genhtml_legend=1 00:12:09.893 --rc geninfo_all_blocks=1 00:12:09.893 --rc geninfo_unexecuted_blocks=1 00:12:09.893 00:12:09.893 ' 00:12:09.893 22:53:37 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:09.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.893 --rc genhtml_branch_coverage=1 00:12:09.893 --rc 
genhtml_function_coverage=1 00:12:09.893 --rc genhtml_legend=1 00:12:09.893 --rc geninfo_all_blocks=1 00:12:09.893 --rc geninfo_unexecuted_blocks=1 00:12:09.893 00:12:09.893 ' 00:12:09.893 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f63ccfb0-8e1a-4e3a-81ed-f5c6f2fe319a 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f63ccfb0-8e1a-4e3a-81ed-f5c6f2fe319a 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.893 22:53:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.893 22:53:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.893 22:53:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.893 22:53:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.894 22:53:37 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.894 22:53:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:12:09.894 22:53:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.894 22:53:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:12:09.894 22:53:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.894 22:53:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.894 22:53:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.894 22:53:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.894 22:53:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.894 22:53:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.894 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.894 22:53:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.894 22:53:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.894 22:53:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:12:09.894 INFO: launching applications... 
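A note on the "[: : integer expression expected" diagnostic that the trace surfaces here (and earlier in the json_config run): at nvmf/common.sh line 33 the guard runs as '[' '' -eq 1 ']' because the flag variable is empty, and bash's [ builtin requires integer operands on both sides of -eq, so it prints the diagnostic and returns non-zero; the run continues because the branch simply isn't taken. A minimal reproduction and one hedged fix, with the variable name illustrative and "default to 0" an assumption about the intended semantics:

  # reproduces the diagnostic seen at nvmf/common.sh line 33
  flag=""
  [ "$flag" -eq 1 ] && echo enabled   # -> [: : integer expression expected

  # defensive variant: give the arithmetic test an integer to chew on
  [ "${flag:-0}" -eq 1 ] && echo enabled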
00:12:09.894 22:53:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:09.894 22:53:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:12:09.894 22:53:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:12:09.894 22:53:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:09.894 22:53:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:09.894 22:53:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:12:09.894 22:53:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:09.894 22:53:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:09.894 22:53:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58771 00:12:09.894 22:53:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:09.894 Waiting for target to run... 00:12:09.894 22:53:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58771 /var/tmp/spdk_tgt.sock 00:12:09.894 22:53:37 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:09.894 22:53:37 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58771 ']' 00:12:09.894 22:53:37 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:09.894 22:53:37 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.894 22:53:37 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:09.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:09.894 22:53:37 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.894 22:53:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:10.152 [2024-12-09 22:53:37.318324] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:12:10.152 [2024-12-09 22:53:37.318681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58771 ] 00:12:10.717 [2024-12-09 22:53:37.869984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.717 [2024-12-09 22:53:37.991210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.650 00:12:11.650 INFO: shutting down applications... 00:12:11.650 22:53:38 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.650 22:53:38 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:12:11.650 22:53:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:12:11.650 22:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
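The launch/teardown choreography traced in this test is the generic pattern from json_config/common.sh: start spdk_tgt with an RPC socket (-r /var/tmp/spdk_tgt.sock) and the extra_key.json config, poll until the target answers on that socket, then on shutdown send SIGINT and poll kill -0 in 0.5 s steps (up to 30 tries here) before giving up. A condensed sketch of that pattern follows; the polling helper is simplified relative to the real waitforlisten in autotest_common.sh, and the wait interval on the startup side is an assumption:

  # condensed sketch of the start/stop pattern traced above
  app_sock=/var/tmp/spdk_tgt.sock
  build/bin/spdk_tgt -m 0x1 -s 1024 -r "$app_sock" \
      --json test/json_config/extra_key.json &
  pid=$!

  # wait until the target answers on its RPC socket (max_retries=100 in the trace)
  for ((i = 0; i < 100; i++)); do
      scripts/rpc.py -s "$app_sock" rpc_get_methods &>/dev/null && break
      sleep 0.5
  done

  # shut down: SIGINT first, then poll liveness; kill -0 only checks existence
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break
      sleep 0.5
  done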
00:12:11.650 22:53:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:12:11.650 22:53:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:12:11.650 22:53:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:11.650 22:53:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58771 ]] 00:12:11.650 22:53:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58771 00:12:11.650 22:53:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:11.650 22:53:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:11.650 22:53:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58771 00:12:11.650 22:53:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:12.304 22:53:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:12.304 22:53:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:12.304 22:53:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58771 00:12:12.304 22:53:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:12.562 22:53:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:12.562 22:53:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:12.562 22:53:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58771 00:12:12.562 22:53:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:13.129 22:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:13.129 22:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:13.129 22:53:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58771 00:12:13.129 22:53:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:13.694 22:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:13.694 22:53:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:13.694 22:53:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58771 00:12:13.694 22:53:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:14.259 22:53:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:14.259 22:53:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:14.259 22:53:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58771 00:12:14.259 22:53:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:14.517 22:53:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:14.517 22:53:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:14.517 22:53:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58771 00:12:14.517 SPDK target shutdown done 00:12:14.517 Success 00:12:14.517 22:53:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:14.517 22:53:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:12:14.517 22:53:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:14.517 22:53:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:14.517 22:53:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:12:14.517 00:12:14.517 real 0m4.831s 00:12:14.517 user 0m4.382s 00:12:14.517 sys 0m0.753s 00:12:14.517 
22:53:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.517 22:53:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:14.517 ************************************ 00:12:14.517 END TEST json_config_extra_key 00:12:14.517 ************************************ 00:12:14.774 22:53:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:14.774 22:53:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.774 22:53:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.774 22:53:41 -- common/autotest_common.sh@10 -- # set +x 00:12:14.774 ************************************ 00:12:14.774 START TEST alias_rpc 00:12:14.774 ************************************ 00:12:14.774 22:53:41 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:14.774 * Looking for test storage... 00:12:14.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:12:14.774 22:53:42 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:14.774 22:53:42 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:14.774 22:53:42 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:14.774 22:53:42 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@345 -- # : 1 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.774 22:53:42 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:15.031 22:53:42 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:12:15.031 22:53:42 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.031 22:53:42 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:12:15.031 22:53:42 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.031 22:53:42 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.031 22:53:42 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.031 22:53:42 alias_rpc -- scripts/common.sh@368 -- # return 0 00:12:15.031 22:53:42 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.031 22:53:42 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:15.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.031 --rc genhtml_branch_coverage=1 00:12:15.031 --rc genhtml_function_coverage=1 00:12:15.031 --rc genhtml_legend=1 00:12:15.031 --rc geninfo_all_blocks=1 00:12:15.031 --rc geninfo_unexecuted_blocks=1 00:12:15.031 00:12:15.031 ' 00:12:15.031 22:53:42 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:15.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.031 --rc genhtml_branch_coverage=1 00:12:15.031 --rc genhtml_function_coverage=1 00:12:15.031 --rc genhtml_legend=1 00:12:15.031 --rc geninfo_all_blocks=1 00:12:15.031 --rc geninfo_unexecuted_blocks=1 00:12:15.031 00:12:15.031 ' 00:12:15.031 22:53:42 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:15.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.031 --rc genhtml_branch_coverage=1 00:12:15.031 --rc genhtml_function_coverage=1 00:12:15.031 --rc genhtml_legend=1 00:12:15.031 --rc geninfo_all_blocks=1 00:12:15.031 --rc geninfo_unexecuted_blocks=1 00:12:15.031 00:12:15.031 ' 00:12:15.031 22:53:42 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:15.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.031 --rc genhtml_branch_coverage=1 00:12:15.031 --rc genhtml_function_coverage=1 00:12:15.031 --rc genhtml_legend=1 00:12:15.031 --rc geninfo_all_blocks=1 00:12:15.031 --rc geninfo_unexecuted_blocks=1 00:12:15.031 00:12:15.031 ' 00:12:15.031 22:53:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:15.031 22:53:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58888 00:12:15.032 22:53:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:15.032 22:53:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58888 00:12:15.032 22:53:42 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58888 ']' 00:12:15.032 22:53:42 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.032 22:53:42 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.032 22:53:42 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:15.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.032 22:53:42 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.032 22:53:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.032 [2024-12-09 22:53:42.222351] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:12:15.032 [2024-12-09 22:53:42.222799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58888 ] 00:12:15.288 [2024-12-09 22:53:42.406331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.288 [2024-12-09 22:53:42.553820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:16.657 22:53:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:12:16.657 22:53:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58888 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58888 ']' 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58888 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58888 00:12:16.657 killing process with pid 58888 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58888' 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@973 -- # kill 58888 00:12:16.657 22:53:43 alias_rpc -- common/autotest_common.sh@978 -- # wait 58888 00:12:19.207 ************************************ 00:12:19.207 END TEST alias_rpc 00:12:19.207 ************************************ 00:12:19.207 00:12:19.207 real 0m4.638s 00:12:19.207 user 0m4.474s 00:12:19.207 sys 0m0.744s 00:12:19.207 22:53:46 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.207 22:53:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.465 22:53:46 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:12:19.465 22:53:46 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:12:19.465 22:53:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:19.465 22:53:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.465 22:53:46 -- common/autotest_common.sh@10 -- # set +x 00:12:19.465 ************************************ 00:12:19.465 START TEST spdkcli_tcp 00:12:19.465 ************************************ 00:12:19.465 22:53:46 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:12:19.465 * Looking for test storage... 
00:12:19.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:12:19.465 22:53:46 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.465 22:53:46 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.465 22:53:46 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.723 22:53:46 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.723 22:53:46 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:12:19.723 22:53:46 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.723 22:53:46 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.723 --rc genhtml_branch_coverage=1 00:12:19.723 --rc genhtml_function_coverage=1 00:12:19.723 --rc genhtml_legend=1 00:12:19.723 --rc geninfo_all_blocks=1 00:12:19.723 --rc geninfo_unexecuted_blocks=1 00:12:19.723 00:12:19.723 ' 00:12:19.723 22:53:46 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.723 --rc genhtml_branch_coverage=1 00:12:19.723 --rc genhtml_function_coverage=1 00:12:19.723 --rc genhtml_legend=1 00:12:19.723 --rc geninfo_all_blocks=1 00:12:19.723 --rc geninfo_unexecuted_blocks=1 00:12:19.723 
00:12:19.723 ' 00:12:19.723 22:53:46 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.723 --rc genhtml_branch_coverage=1 00:12:19.723 --rc genhtml_function_coverage=1 00:12:19.723 --rc genhtml_legend=1 00:12:19.723 --rc geninfo_all_blocks=1 00:12:19.723 --rc geninfo_unexecuted_blocks=1 00:12:19.723 00:12:19.724 ' 00:12:19.724 22:53:46 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.724 --rc genhtml_branch_coverage=1 00:12:19.724 --rc genhtml_function_coverage=1 00:12:19.724 --rc genhtml_legend=1 00:12:19.724 --rc geninfo_all_blocks=1 00:12:19.724 --rc geninfo_unexecuted_blocks=1 00:12:19.724 00:12:19.724 ' 00:12:19.724 22:53:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:12:19.724 22:53:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:12:19.724 22:53:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:12:19.724 22:53:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:12:19.724 22:53:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:12:19.724 22:53:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:19.724 22:53:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:12:19.724 22:53:46 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.724 22:53:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.724 22:53:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59000 00:12:19.724 22:53:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:12:19.724 22:53:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59000 00:12:19.724 22:53:46 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59000 ']' 00:12:19.724 22:53:46 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.724 22:53:46 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.724 22:53:46 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.724 22:53:46 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.724 22:53:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.724 [2024-12-09 22:53:46.950089] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
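The lcov version probe that opens each of these test files (lt 1.15 2 via cmp_versions in scripts/common.sh) is worth unpacking once, since the same trace repeats for every suite: both version strings are split on '.', '-' and ':' into arrays and compared field by field, with missing fields treated as 0. A simplified sketch covering only the less-than case; the real helper also handles '>', '=' and non-numeric fields:

  # condensed sketch of the cmp_versions logic traced above
  lt() {
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < max; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal -> not less-than
  }
  lt 1.15 2 && echo "lcov is older than 2.x"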
00:12:19.724 [2024-12-09 22:53:46.950243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59000 ] 00:12:19.982 [2024-12-09 22:53:47.134499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:19.982 [2024-12-09 22:53:47.286427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.982 [2024-12-09 22:53:47.286488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.358 22:53:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.358 22:53:48 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:12:21.358 22:53:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:12:21.358 22:53:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59023 00:12:21.358 22:53:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:12:21.358 [ 00:12:21.358 "bdev_malloc_delete", 00:12:21.358 "bdev_malloc_create", 00:12:21.358 "bdev_null_resize", 00:12:21.358 "bdev_null_delete", 00:12:21.358 "bdev_null_create", 00:12:21.358 "bdev_nvme_cuse_unregister", 00:12:21.358 "bdev_nvme_cuse_register", 00:12:21.358 "bdev_opal_new_user", 00:12:21.358 "bdev_opal_set_lock_state", 00:12:21.358 "bdev_opal_delete", 00:12:21.358 "bdev_opal_get_info", 00:12:21.358 "bdev_opal_create", 00:12:21.358 "bdev_nvme_opal_revert", 00:12:21.358 "bdev_nvme_opal_init", 00:12:21.358 "bdev_nvme_send_cmd", 00:12:21.358 "bdev_nvme_set_keys", 00:12:21.358 "bdev_nvme_get_path_iostat", 00:12:21.358 "bdev_nvme_get_mdns_discovery_info", 00:12:21.358 "bdev_nvme_stop_mdns_discovery", 00:12:21.358 "bdev_nvme_start_mdns_discovery", 00:12:21.358 "bdev_nvme_set_multipath_policy", 00:12:21.358 "bdev_nvme_set_preferred_path", 00:12:21.358 "bdev_nvme_get_io_paths", 00:12:21.358 "bdev_nvme_remove_error_injection", 00:12:21.358 "bdev_nvme_add_error_injection", 00:12:21.358 "bdev_nvme_get_discovery_info", 00:12:21.358 "bdev_nvme_stop_discovery", 00:12:21.358 "bdev_nvme_start_discovery", 00:12:21.358 "bdev_nvme_get_controller_health_info", 00:12:21.358 "bdev_nvme_disable_controller", 00:12:21.358 "bdev_nvme_enable_controller", 00:12:21.358 "bdev_nvme_reset_controller", 00:12:21.358 "bdev_nvme_get_transport_statistics", 00:12:21.358 "bdev_nvme_apply_firmware", 00:12:21.358 "bdev_nvme_detach_controller", 00:12:21.358 "bdev_nvme_get_controllers", 00:12:21.358 "bdev_nvme_attach_controller", 00:12:21.358 "bdev_nvme_set_hotplug", 00:12:21.358 "bdev_nvme_set_options", 00:12:21.358 "bdev_passthru_delete", 00:12:21.358 "bdev_passthru_create", 00:12:21.358 "bdev_lvol_set_parent_bdev", 00:12:21.358 "bdev_lvol_set_parent", 00:12:21.358 "bdev_lvol_check_shallow_copy", 00:12:21.358 "bdev_lvol_start_shallow_copy", 00:12:21.358 "bdev_lvol_grow_lvstore", 00:12:21.358 "bdev_lvol_get_lvols", 00:12:21.358 "bdev_lvol_get_lvstores", 00:12:21.358 "bdev_lvol_delete", 00:12:21.358 "bdev_lvol_set_read_only", 00:12:21.358 "bdev_lvol_resize", 00:12:21.358 "bdev_lvol_decouple_parent", 00:12:21.358 "bdev_lvol_inflate", 00:12:21.358 "bdev_lvol_rename", 00:12:21.358 "bdev_lvol_clone_bdev", 00:12:21.358 "bdev_lvol_clone", 00:12:21.358 "bdev_lvol_snapshot", 00:12:21.358 "bdev_lvol_create", 00:12:21.358 "bdev_lvol_delete_lvstore", 00:12:21.358 "bdev_lvol_rename_lvstore", 00:12:21.358 
"bdev_lvol_create_lvstore", 00:12:21.358 "bdev_raid_set_options", 00:12:21.358 "bdev_raid_remove_base_bdev", 00:12:21.358 "bdev_raid_add_base_bdev", 00:12:21.358 "bdev_raid_delete", 00:12:21.358 "bdev_raid_create", 00:12:21.358 "bdev_raid_get_bdevs", 00:12:21.358 "bdev_error_inject_error", 00:12:21.358 "bdev_error_delete", 00:12:21.358 "bdev_error_create", 00:12:21.358 "bdev_split_delete", 00:12:21.358 "bdev_split_create", 00:12:21.358 "bdev_delay_delete", 00:12:21.358 "bdev_delay_create", 00:12:21.358 "bdev_delay_update_latency", 00:12:21.358 "bdev_zone_block_delete", 00:12:21.358 "bdev_zone_block_create", 00:12:21.358 "blobfs_create", 00:12:21.358 "blobfs_detect", 00:12:21.358 "blobfs_set_cache_size", 00:12:21.358 "bdev_xnvme_delete", 00:12:21.358 "bdev_xnvme_create", 00:12:21.358 "bdev_aio_delete", 00:12:21.358 "bdev_aio_rescan", 00:12:21.358 "bdev_aio_create", 00:12:21.358 "bdev_ftl_set_property", 00:12:21.358 "bdev_ftl_get_properties", 00:12:21.358 "bdev_ftl_get_stats", 00:12:21.359 "bdev_ftl_unmap", 00:12:21.359 "bdev_ftl_unload", 00:12:21.359 "bdev_ftl_delete", 00:12:21.359 "bdev_ftl_load", 00:12:21.359 "bdev_ftl_create", 00:12:21.359 "bdev_virtio_attach_controller", 00:12:21.359 "bdev_virtio_scsi_get_devices", 00:12:21.359 "bdev_virtio_detach_controller", 00:12:21.359 "bdev_virtio_blk_set_hotplug", 00:12:21.359 "bdev_iscsi_delete", 00:12:21.359 "bdev_iscsi_create", 00:12:21.359 "bdev_iscsi_set_options", 00:12:21.359 "accel_error_inject_error", 00:12:21.359 "ioat_scan_accel_module", 00:12:21.359 "dsa_scan_accel_module", 00:12:21.359 "iaa_scan_accel_module", 00:12:21.359 "keyring_file_remove_key", 00:12:21.359 "keyring_file_add_key", 00:12:21.359 "keyring_linux_set_options", 00:12:21.359 "fsdev_aio_delete", 00:12:21.359 "fsdev_aio_create", 00:12:21.359 "iscsi_get_histogram", 00:12:21.359 "iscsi_enable_histogram", 00:12:21.359 "iscsi_set_options", 00:12:21.359 "iscsi_get_auth_groups", 00:12:21.359 "iscsi_auth_group_remove_secret", 00:12:21.359 "iscsi_auth_group_add_secret", 00:12:21.359 "iscsi_delete_auth_group", 00:12:21.359 "iscsi_create_auth_group", 00:12:21.359 "iscsi_set_discovery_auth", 00:12:21.359 "iscsi_get_options", 00:12:21.359 "iscsi_target_node_request_logout", 00:12:21.359 "iscsi_target_node_set_redirect", 00:12:21.359 "iscsi_target_node_set_auth", 00:12:21.359 "iscsi_target_node_add_lun", 00:12:21.359 "iscsi_get_stats", 00:12:21.359 "iscsi_get_connections", 00:12:21.359 "iscsi_portal_group_set_auth", 00:12:21.359 "iscsi_start_portal_group", 00:12:21.359 "iscsi_delete_portal_group", 00:12:21.359 "iscsi_create_portal_group", 00:12:21.359 "iscsi_get_portal_groups", 00:12:21.359 "iscsi_delete_target_node", 00:12:21.359 "iscsi_target_node_remove_pg_ig_maps", 00:12:21.359 "iscsi_target_node_add_pg_ig_maps", 00:12:21.359 "iscsi_create_target_node", 00:12:21.359 "iscsi_get_target_nodes", 00:12:21.359 "iscsi_delete_initiator_group", 00:12:21.359 "iscsi_initiator_group_remove_initiators", 00:12:21.359 "iscsi_initiator_group_add_initiators", 00:12:21.359 "iscsi_create_initiator_group", 00:12:21.359 "iscsi_get_initiator_groups", 00:12:21.359 "nvmf_set_crdt", 00:12:21.359 "nvmf_set_config", 00:12:21.359 "nvmf_set_max_subsystems", 00:12:21.359 "nvmf_stop_mdns_prr", 00:12:21.359 "nvmf_publish_mdns_prr", 00:12:21.359 "nvmf_subsystem_get_listeners", 00:12:21.359 "nvmf_subsystem_get_qpairs", 00:12:21.359 "nvmf_subsystem_get_controllers", 00:12:21.359 "nvmf_get_stats", 00:12:21.359 "nvmf_get_transports", 00:12:21.359 "nvmf_create_transport", 00:12:21.359 "nvmf_get_targets", 00:12:21.359 
"nvmf_delete_target", 00:12:21.359 "nvmf_create_target", 00:12:21.359 "nvmf_subsystem_allow_any_host", 00:12:21.359 "nvmf_subsystem_set_keys", 00:12:21.359 "nvmf_subsystem_remove_host", 00:12:21.359 "nvmf_subsystem_add_host", 00:12:21.359 "nvmf_ns_remove_host", 00:12:21.359 "nvmf_ns_add_host", 00:12:21.359 "nvmf_subsystem_remove_ns", 00:12:21.359 "nvmf_subsystem_set_ns_ana_group", 00:12:21.359 "nvmf_subsystem_add_ns", 00:12:21.359 "nvmf_subsystem_listener_set_ana_state", 00:12:21.359 "nvmf_discovery_get_referrals", 00:12:21.359 "nvmf_discovery_remove_referral", 00:12:21.359 "nvmf_discovery_add_referral", 00:12:21.359 "nvmf_subsystem_remove_listener", 00:12:21.359 "nvmf_subsystem_add_listener", 00:12:21.359 "nvmf_delete_subsystem", 00:12:21.359 "nvmf_create_subsystem", 00:12:21.359 "nvmf_get_subsystems", 00:12:21.359 "env_dpdk_get_mem_stats", 00:12:21.359 "nbd_get_disks", 00:12:21.359 "nbd_stop_disk", 00:12:21.359 "nbd_start_disk", 00:12:21.359 "ublk_recover_disk", 00:12:21.359 "ublk_get_disks", 00:12:21.359 "ublk_stop_disk", 00:12:21.359 "ublk_start_disk", 00:12:21.359 "ublk_destroy_target", 00:12:21.359 "ublk_create_target", 00:12:21.359 "virtio_blk_create_transport", 00:12:21.359 "virtio_blk_get_transports", 00:12:21.359 "vhost_controller_set_coalescing", 00:12:21.359 "vhost_get_controllers", 00:12:21.359 "vhost_delete_controller", 00:12:21.359 "vhost_create_blk_controller", 00:12:21.359 "vhost_scsi_controller_remove_target", 00:12:21.359 "vhost_scsi_controller_add_target", 00:12:21.359 "vhost_start_scsi_controller", 00:12:21.359 "vhost_create_scsi_controller", 00:12:21.359 "thread_set_cpumask", 00:12:21.359 "scheduler_set_options", 00:12:21.359 "framework_get_governor", 00:12:21.359 "framework_get_scheduler", 00:12:21.359 "framework_set_scheduler", 00:12:21.359 "framework_get_reactors", 00:12:21.359 "thread_get_io_channels", 00:12:21.359 "thread_get_pollers", 00:12:21.359 "thread_get_stats", 00:12:21.359 "framework_monitor_context_switch", 00:12:21.359 "spdk_kill_instance", 00:12:21.359 "log_enable_timestamps", 00:12:21.359 "log_get_flags", 00:12:21.359 "log_clear_flag", 00:12:21.359 "log_set_flag", 00:12:21.359 "log_get_level", 00:12:21.359 "log_set_level", 00:12:21.359 "log_get_print_level", 00:12:21.359 "log_set_print_level", 00:12:21.359 "framework_enable_cpumask_locks", 00:12:21.359 "framework_disable_cpumask_locks", 00:12:21.359 "framework_wait_init", 00:12:21.359 "framework_start_init", 00:12:21.359 "scsi_get_devices", 00:12:21.359 "bdev_get_histogram", 00:12:21.359 "bdev_enable_histogram", 00:12:21.359 "bdev_set_qos_limit", 00:12:21.359 "bdev_set_qd_sampling_period", 00:12:21.359 "bdev_get_bdevs", 00:12:21.359 "bdev_reset_iostat", 00:12:21.359 "bdev_get_iostat", 00:12:21.359 "bdev_examine", 00:12:21.359 "bdev_wait_for_examine", 00:12:21.359 "bdev_set_options", 00:12:21.359 "accel_get_stats", 00:12:21.359 "accel_set_options", 00:12:21.359 "accel_set_driver", 00:12:21.359 "accel_crypto_key_destroy", 00:12:21.359 "accel_crypto_keys_get", 00:12:21.359 "accel_crypto_key_create", 00:12:21.359 "accel_assign_opc", 00:12:21.359 "accel_get_module_info", 00:12:21.359 "accel_get_opc_assignments", 00:12:21.359 "vmd_rescan", 00:12:21.359 "vmd_remove_device", 00:12:21.359 "vmd_enable", 00:12:21.359 "sock_get_default_impl", 00:12:21.359 "sock_set_default_impl", 00:12:21.359 "sock_impl_set_options", 00:12:21.359 "sock_impl_get_options", 00:12:21.359 "iobuf_get_stats", 00:12:21.359 "iobuf_set_options", 00:12:21.359 "keyring_get_keys", 00:12:21.359 "framework_get_pci_devices", 00:12:21.359 
"framework_get_config", 00:12:21.359 "framework_get_subsystems", 00:12:21.359 "fsdev_set_opts", 00:12:21.359 "fsdev_get_opts", 00:12:21.359 "trace_get_info", 00:12:21.359 "trace_get_tpoint_group_mask", 00:12:21.359 "trace_disable_tpoint_group", 00:12:21.359 "trace_enable_tpoint_group", 00:12:21.359 "trace_clear_tpoint_mask", 00:12:21.359 "trace_set_tpoint_mask", 00:12:21.359 "notify_get_notifications", 00:12:21.359 "notify_get_types", 00:12:21.359 "spdk_get_version", 00:12:21.359 "rpc_get_methods" 00:12:21.359 ] 00:12:21.359 22:53:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:21.359 22:53:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:21.359 22:53:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59000 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59000 ']' 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59000 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59000 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.359 killing process with pid 59000 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59000' 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59000 00:12:21.359 22:53:48 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59000 00:12:24.646 00:12:24.646 real 0m4.656s 00:12:24.646 user 0m8.136s 00:12:24.646 sys 0m0.802s 00:12:24.646 22:53:51 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.646 22:53:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:24.646 ************************************ 00:12:24.646 END TEST spdkcli_tcp 00:12:24.646 ************************************ 00:12:24.646 22:53:51 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:24.646 22:53:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:24.646 22:53:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.646 22:53:51 -- common/autotest_common.sh@10 -- # set +x 00:12:24.646 ************************************ 00:12:24.646 START TEST dpdk_mem_utility 00:12:24.646 ************************************ 00:12:24.646 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:24.646 * Looking for test storage... 
00:12:24.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:12:24.646 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:24.646 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:12:24.646 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:24.646 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.646 22:53:51 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:12:24.646 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.646 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:24.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.646 --rc genhtml_branch_coverage=1 00:12:24.646 --rc genhtml_function_coverage=1 00:12:24.646 --rc genhtml_legend=1 00:12:24.646 --rc geninfo_all_blocks=1 00:12:24.646 --rc geninfo_unexecuted_blocks=1 00:12:24.647 00:12:24.647 ' 00:12:24.647 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:24.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.647 --rc 
genhtml_branch_coverage=1 00:12:24.647 --rc genhtml_function_coverage=1 00:12:24.647 --rc genhtml_legend=1 00:12:24.647 --rc geninfo_all_blocks=1 00:12:24.647 --rc geninfo_unexecuted_blocks=1 00:12:24.647 00:12:24.647 ' 00:12:24.647 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:24.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.647 --rc genhtml_branch_coverage=1 00:12:24.647 --rc genhtml_function_coverage=1 00:12:24.647 --rc genhtml_legend=1 00:12:24.647 --rc geninfo_all_blocks=1 00:12:24.647 --rc geninfo_unexecuted_blocks=1 00:12:24.647 00:12:24.647 ' 00:12:24.647 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:24.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.647 --rc genhtml_branch_coverage=1 00:12:24.647 --rc genhtml_function_coverage=1 00:12:24.647 --rc genhtml_legend=1 00:12:24.647 --rc geninfo_all_blocks=1 00:12:24.647 --rc geninfo_unexecuted_blocks=1 00:12:24.647 00:12:24.647 ' 00:12:24.647 22:53:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:24.647 22:53:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59128 00:12:24.647 22:53:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59128 00:12:24.647 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59128 ']' 00:12:24.647 22:53:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:24.647 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.647 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.647 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.647 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.647 22:53:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:24.647 [2024-12-09 22:53:51.681225] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:12:24.647 [2024-12-09 22:53:51.681388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59128 ] 00:12:24.647 [2024-12-09 22:53:51.867286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.936 [2024-12-09 22:53:52.019986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.872 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.872 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:12:25.872 22:53:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:12:25.872 22:53:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:12:25.872 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.872 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:25.872 { 00:12:25.872 "filename": "/tmp/spdk_mem_dump.txt" 00:12:25.872 } 00:12:25.872 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.872 22:53:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:25.872 DPDK memory size 824.000000 MiB in 1 heap(s) 00:12:25.872 1 heaps totaling size 824.000000 MiB 00:12:25.872 size: 824.000000 MiB heap id: 0 00:12:25.872 end heaps---------- 00:12:25.872 9 mempools totaling size 603.782043 MiB 00:12:25.873 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:12:25.873 size: 158.602051 MiB name: PDU_data_out_Pool 00:12:25.873 size: 100.555481 MiB name: bdev_io_59128 00:12:25.873 size: 50.003479 MiB name: msgpool_59128 00:12:25.873 size: 36.509338 MiB name: fsdev_io_59128 00:12:25.873 size: 21.763794 MiB name: PDU_Pool 00:12:25.873 size: 19.513306 MiB name: SCSI_TASK_Pool 00:12:25.873 size: 4.133484 MiB name: evtpool_59128 00:12:25.873 size: 0.026123 MiB name: Session_Pool 00:12:25.873 end mempools------- 00:12:25.873 6 memzones totaling size 4.142822 MiB 00:12:25.873 size: 1.000366 MiB name: RG_ring_0_59128 00:12:25.873 size: 1.000366 MiB name: RG_ring_1_59128 00:12:25.873 size: 1.000366 MiB name: RG_ring_4_59128 00:12:25.873 size: 1.000366 MiB name: RG_ring_5_59128 00:12:25.873 size: 0.125366 MiB name: RG_ring_2_59128 00:12:25.873 size: 0.015991 MiB name: RG_ring_3_59128 00:12:25.873 end memzones------- 00:12:25.873 22:53:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:12:25.873 heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18 00:12:25.873 list of free elements. 
size: 16.779419 MiB 00:12:25.873 element at address: 0x200006400000 with size: 1.995972 MiB 00:12:25.873 element at address: 0x20000a600000 with size: 1.995972 MiB 00:12:25.873 element at address: 0x200003e00000 with size: 1.991028 MiB 00:12:25.873 element at address: 0x200019500040 with size: 0.999939 MiB 00:12:25.873 element at address: 0x200019900040 with size: 0.999939 MiB 00:12:25.873 element at address: 0x200019a00000 with size: 0.999084 MiB 00:12:25.873 element at address: 0x200032600000 with size: 0.994324 MiB 00:12:25.873 element at address: 0x200000400000 with size: 0.992004 MiB 00:12:25.873 element at address: 0x200019200000 with size: 0.959656 MiB 00:12:25.873 element at address: 0x200019d00040 with size: 0.936401 MiB 00:12:25.873 element at address: 0x200000200000 with size: 0.716980 MiB 00:12:25.873 element at address: 0x20001b400000 with size: 0.560486 MiB 00:12:25.873 element at address: 0x200000c00000 with size: 0.489197 MiB 00:12:25.873 element at address: 0x200019600000 with size: 0.488220 MiB 00:12:25.873 element at address: 0x200019e00000 with size: 0.485413 MiB 00:12:25.873 element at address: 0x200012c00000 with size: 0.433472 MiB 00:12:25.873 element at address: 0x200028800000 with size: 0.390442 MiB 00:12:25.873 element at address: 0x200000800000 with size: 0.350891 MiB 00:12:25.873 list of standard malloc elements. size: 199.289673 MiB 00:12:25.873 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:12:25.873 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:12:25.873 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:12:25.873 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:12:25.873 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:12:25.873 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:12:25.873 element at address: 0x200019deff40 with size: 0.062683 MiB 00:12:25.873 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:12:25.873 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:12:25.873 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:12:25.873 element at address: 0x200012bff040 with size: 0.000305 MiB 00:12:25.873 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:12:25.873 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:12:25.873 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:12:25.873 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200000cff000 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:12:25.873 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bff180 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bff280 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bff380 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bff480 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bff580 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bff680 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bff780 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bff880 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bff980 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:12:25.873 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:12:25.874 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200019affc40 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4915c0 with size: 0.000244 MiB 
00:12:25.874 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:12:25.874 element at 
address: 0x20001b4947c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200028863f40 with size: 0.000244 MiB 00:12:25.874 element at address: 0x200028864040 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886af80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886b080 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886b180 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886b280 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886b380 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886b480 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886b580 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886b680 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886b780 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886b880 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886b980 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886be80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886c080 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886c180 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886c280 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886c380 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886c480 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886c580 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886c680 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886c780 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886c880 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886c980 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:12:25.874 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886d080 
with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886d180 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886d280 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886d380 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886d480 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886d580 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886d680 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886d780 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886d880 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886d980 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886da80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886db80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886de80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886df80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886e080 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886e180 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886e280 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886e380 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886e480 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886e580 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886e680 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886e780 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886e880 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886e980 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886f080 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886f180 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886f280 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886f380 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886f480 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886f580 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886f680 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886f780 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886f880 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886f980 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:12:25.875 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:12:25.875 list of memzone associated elements. 
size: 607.930908 MiB 00:12:25.875 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:12:25.875 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:12:25.875 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:12:25.875 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:12:25.875 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:12:25.875 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59128_0 00:12:25.875 element at address: 0x200000dff340 with size: 48.003113 MiB 00:12:25.875 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59128_0 00:12:25.875 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:12:25.875 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59128_0 00:12:25.875 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:12:25.875 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:12:25.875 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:12:25.875 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:12:25.875 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:12:25.875 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59128_0 00:12:25.875 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:12:25.875 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59128 00:12:25.875 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:12:25.875 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59128 00:12:25.875 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:12:25.875 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:12:25.875 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:12:25.875 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:12:25.875 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:12:25.875 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:12:25.875 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:12:25.875 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:12:25.875 element at address: 0x200000cff100 with size: 1.000549 MiB 00:12:25.875 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59128 00:12:25.875 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:12:25.875 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59128 00:12:25.875 element at address: 0x200019affd40 with size: 1.000549 MiB 00:12:25.875 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59128 00:12:25.875 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:12:25.875 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59128 00:12:25.875 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:12:25.875 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59128 00:12:25.875 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:12:25.875 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59128 00:12:25.875 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:12:25.875 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:12:25.875 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:12:25.875 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:12:25.875 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:12:25.875 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool
00:12:25.875 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:12:25.875 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59128
00:12:25.875 element at address: 0x20000085df80 with size: 0.125549 MiB
00:12:25.875 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59128
00:12:25.875 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:12:25.875 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:12:25.875 element at address: 0x200028864140 with size: 0.023804 MiB
00:12:25.875 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:12:25.875 element at address: 0x200000859d40 with size: 0.016174 MiB
00:12:25.875 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59128
00:12:25.875 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:12:25.875 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:12:25.875 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:12:25.875 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59128
00:12:25.875 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:12:25.875 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59128
00:12:25.875 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:12:25.875 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59128
00:12:25.875 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:12:25.875 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:12:25.875 22:53:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:12:25.875 22:53:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59128
00:12:25.875 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59128 ']'
00:12:25.875 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59128
00:12:25.875 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:12:25.875 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:25.875 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59128
00:12:26.134 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:26.134 killing process with pid 59128
00:12:26.134 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:26.134 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59128'
00:12:26.134 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59128
00:12:26.134 22:53:53 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59128
00:12:28.668
00:12:28.668 real 0m4.500s
00:12:28.668 user 0m4.240s
00:12:28.668 sys 0m0.787s
00:12:28.668 22:53:55 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:28.668 22:53:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:12:28.668 ************************************
00:12:28.668 END TEST dpdk_mem_utility
00:12:28.668 ************************************
00:12:28.668 22:53:55 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:12:28.668 22:53:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:28.668 22:53:55 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:28.668 22:53:55 -- common/autotest_common.sh@10 -- # set +x
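The dpdk_mem_utility run above exercises SPDK's memory-introspection path end to end: the env_dpdk_get_mem_stats RPC asks the running target to write its DPDK heap state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders that dump as the heap/mempool/memzone summary, with -m 0 adding the per-element breakdown of heap 0 seen above. A minimal sketch of driving the same pair of tools by hand, assuming an SPDK target is already running and listening on the default RPC socket:

    # Ask the target to dump its DPDK memory state (writes /tmp/spdk_mem_dump.txt)
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # Summarize heaps, mempools, and memzones from the dump
    ./scripts/dpdk_mem_info.py
    # Show the per-element view of heap id 0, as in the log above
    ./scripts/dpdk_mem_info.py -m 0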
00:12:28.668 ************************************ 00:12:28.668 START TEST event 00:12:28.668 ************************************ 00:12:28.668 22:53:55 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:28.927 * Looking for test storage... 00:12:28.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:28.927 22:53:56 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:28.927 22:53:56 event -- common/autotest_common.sh@1711 -- # lcov --version 00:12:28.927 22:53:56 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:28.927 22:53:56 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:28.927 22:53:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.927 22:53:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.927 22:53:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.927 22:53:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.927 22:53:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.927 22:53:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.927 22:53:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.927 22:53:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.927 22:53:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.927 22:53:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.927 22:53:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.927 22:53:56 event -- scripts/common.sh@344 -- # case "$op" in 00:12:28.927 22:53:56 event -- scripts/common.sh@345 -- # : 1 00:12:28.927 22:53:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.928 22:53:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.928 22:53:56 event -- scripts/common.sh@365 -- # decimal 1 00:12:28.928 22:53:56 event -- scripts/common.sh@353 -- # local d=1 00:12:28.928 22:53:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.928 22:53:56 event -- scripts/common.sh@355 -- # echo 1 00:12:28.928 22:53:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.928 22:53:56 event -- scripts/common.sh@366 -- # decimal 2 00:12:28.928 22:53:56 event -- scripts/common.sh@353 -- # local d=2 00:12:28.928 22:53:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.928 22:53:56 event -- scripts/common.sh@355 -- # echo 2 00:12:28.928 22:53:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.928 22:53:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.928 22:53:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.928 22:53:56 event -- scripts/common.sh@368 -- # return 0 00:12:28.928 22:53:56 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.928 22:53:56 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:28.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.928 --rc genhtml_branch_coverage=1 00:12:28.928 --rc genhtml_function_coverage=1 00:12:28.928 --rc genhtml_legend=1 00:12:28.928 --rc geninfo_all_blocks=1 00:12:28.928 --rc geninfo_unexecuted_blocks=1 00:12:28.928 00:12:28.928 ' 00:12:28.928 22:53:56 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:28.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.928 --rc genhtml_branch_coverage=1 00:12:28.928 --rc genhtml_function_coverage=1 00:12:28.928 --rc genhtml_legend=1 00:12:28.928 --rc 
geninfo_all_blocks=1 00:12:28.928 --rc geninfo_unexecuted_blocks=1 00:12:28.928 00:12:28.928 ' 00:12:28.928 22:53:56 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:28.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.928 --rc genhtml_branch_coverage=1 00:12:28.928 --rc genhtml_function_coverage=1 00:12:28.928 --rc genhtml_legend=1 00:12:28.928 --rc geninfo_all_blocks=1 00:12:28.928 --rc geninfo_unexecuted_blocks=1 00:12:28.928 00:12:28.928 ' 00:12:28.928 22:53:56 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:28.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.928 --rc genhtml_branch_coverage=1 00:12:28.928 --rc genhtml_function_coverage=1 00:12:28.928 --rc genhtml_legend=1 00:12:28.928 --rc geninfo_all_blocks=1 00:12:28.928 --rc geninfo_unexecuted_blocks=1 00:12:28.928 00:12:28.928 ' 00:12:28.928 22:53:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:28.928 22:53:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:12:28.928 22:53:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:28.928 22:53:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:28.928 22:53:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.928 22:53:56 event -- common/autotest_common.sh@10 -- # set +x 00:12:28.928 ************************************ 00:12:28.928 START TEST event_perf 00:12:28.928 ************************************ 00:12:28.928 22:53:56 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:28.928 Running I/O for 1 seconds...[2024-12-09 22:53:56.211200] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:12:28.928 [2024-12-09 22:53:56.211334] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59236 ] 00:12:29.187 [2024-12-09 22:53:56.396664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.447 [2024-12-09 22:53:56.557032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.447 [2024-12-09 22:53:56.557164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.447 [2024-12-09 22:53:56.557263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.447 [2024-12-09 22:53:56.557289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.829 Running I/O for 1 seconds... 00:12:30.829 lcore 0: 206340 00:12:30.829 lcore 1: 206340 00:12:30.829 lcore 2: 206340 00:12:30.829 lcore 3: 206340 00:12:30.829 done. 
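The four per-lcore counts above are event_perf's result for a one-second run on core mask 0xF: each reactor processed about 206,000 events, and near-equal counts across lcores are the expected outcome since every reactor runs the same workload independently. A sketch of invoking the benchmark directly from a built SPDK tree, reusing the binary path and flags the harness traces above (-m is the core mask, -t the run time in seconds):

    ./test/event/event_perf/event_perf -m 0xF -t 1    # four reactors, one second
    ./test/event/event_perf/event_perf -m 0x1 -t 1    # single-reactor comparison run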
00:12:30.829 00:12:30.829 real 0m1.661s 00:12:30.829 user 0m4.407s 00:12:30.829 sys 0m0.130s 00:12:30.829 22:53:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.829 22:53:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:12:30.829 ************************************ 00:12:30.829 END TEST event_perf 00:12:30.829 ************************************ 00:12:30.829 22:53:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:30.829 22:53:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:30.829 22:53:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.829 22:53:57 event -- common/autotest_common.sh@10 -- # set +x 00:12:30.829 ************************************ 00:12:30.829 START TEST event_reactor 00:12:30.829 ************************************ 00:12:30.829 22:53:57 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:30.829 [2024-12-09 22:53:57.947899] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:12:30.829 [2024-12-09 22:53:57.948039] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59281 ] 00:12:30.829 [2024-12-09 22:53:58.129162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.088 [2024-12-09 22:53:58.277136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.475 test_start 00:12:32.475 oneshot 00:12:32.475 tick 100 00:12:32.475 tick 100 00:12:32.475 tick 250 00:12:32.475 tick 100 00:12:32.475 tick 100 00:12:32.475 tick 100 00:12:32.475 tick 250 00:12:32.475 tick 500 00:12:32.475 tick 100 00:12:32.475 tick 100 00:12:32.475 tick 250 00:12:32.475 tick 100 00:12:32.475 tick 100 00:12:32.475 test_end 00:12:32.475 00:12:32.475 real 0m1.628s 00:12:32.475 user 0m1.406s 00:12:32.475 sys 0m0.112s 00:12:32.475 22:53:59 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.475 22:53:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:12:32.475 ************************************ 00:12:32.475 END TEST event_reactor 00:12:32.475 ************************************ 00:12:32.475 22:53:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:32.475 22:53:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:32.475 22:53:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.475 22:53:59 event -- common/autotest_common.sh@10 -- # set +x 00:12:32.475 ************************************ 00:12:32.475 START TEST event_reactor_perf 00:12:32.475 ************************************ 00:12:32.475 22:53:59 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:32.475 [2024-12-09 22:53:59.659406] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:12:32.475 [2024-12-09 22:53:59.659611] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59319 ] 00:12:32.734 [2024-12-09 22:53:59.843275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.734 [2024-12-09 22:53:59.998195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.113 test_start 00:12:34.113 test_end 00:12:34.113 Performance: 368000 events per second 00:12:34.113 00:12:34.113 real 0m1.641s 00:12:34.113 user 0m1.416s 00:12:34.113 sys 0m0.114s 00:12:34.113 22:54:01 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.113 22:54:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:12:34.113 ************************************ 00:12:34.113 END TEST event_reactor_perf 00:12:34.113 ************************************ 00:12:34.113 22:54:01 event -- event/event.sh@49 -- # uname -s 00:12:34.113 22:54:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:12:34.113 22:54:01 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:34.113 22:54:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:34.113 22:54:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.113 22:54:01 event -- common/autotest_common.sh@10 -- # set +x 00:12:34.113 ************************************ 00:12:34.113 START TEST event_scheduler 00:12:34.113 ************************************ 00:12:34.113 22:54:01 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:34.373 * Looking for test storage... 
00:12:34.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.373 22:54:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:34.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.373 --rc genhtml_branch_coverage=1 00:12:34.373 --rc genhtml_function_coverage=1 00:12:34.373 --rc genhtml_legend=1 00:12:34.373 --rc geninfo_all_blocks=1 00:12:34.373 --rc geninfo_unexecuted_blocks=1 00:12:34.373 00:12:34.373 ' 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:34.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.373 --rc genhtml_branch_coverage=1 00:12:34.373 --rc genhtml_function_coverage=1 00:12:34.373 --rc genhtml_legend=1 00:12:34.373 --rc geninfo_all_blocks=1 00:12:34.373 --rc geninfo_unexecuted_blocks=1 00:12:34.373 00:12:34.373 ' 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:34.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.373 --rc genhtml_branch_coverage=1 00:12:34.373 --rc genhtml_function_coverage=1 00:12:34.373 --rc genhtml_legend=1 00:12:34.373 --rc geninfo_all_blocks=1 00:12:34.373 --rc geninfo_unexecuted_blocks=1 00:12:34.373 00:12:34.373 ' 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:34.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.373 --rc genhtml_branch_coverage=1 00:12:34.373 --rc genhtml_function_coverage=1 00:12:34.373 --rc genhtml_legend=1 00:12:34.373 --rc geninfo_all_blocks=1 00:12:34.373 --rc geninfo_unexecuted_blocks=1 00:12:34.373 00:12:34.373 ' 00:12:34.373 22:54:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:12:34.373 22:54:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59395 00:12:34.373 22:54:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:12:34.373 22:54:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:12:34.373 22:54:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59395 00:12:34.373 22:54:01 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59395 ']' 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.373 22:54:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:34.373 [2024-12-09 22:54:01.702687] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:12:34.373 [2024-12-09 22:54:01.702860] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59395 ] 00:12:34.631 [2024-12-09 22:54:01.888421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.936 [2024-12-09 22:54:02.043746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.936 [2024-12-09 22:54:02.043929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.936 [2024-12-09 22:54:02.044072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.936 [2024-12-09 22:54:02.044106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.194 22:54:02 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.194 22:54:02 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:12:35.194 22:54:02 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:12:35.194 22:54:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.194 22:54:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:35.194 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:35.194 POWER: Cannot set governor of lcore 0 to userspace 00:12:35.194 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:35.194 POWER: Cannot set governor of lcore 0 to performance 00:12:35.194 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:35.194 POWER: Cannot set governor of lcore 0 to userspace 00:12:35.194 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:35.194 POWER: Cannot set governor of lcore 0 to userspace 00:12:35.194 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:12:35.194 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:12:35.194 POWER: Unable to set Power Management Environment for lcore 0 00:12:35.194 [2024-12-09 22:54:02.525131] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:12:35.194 [2024-12-09 22:54:02.525161] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:12:35.195 [2024-12-09 22:54:02.525174] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:12:35.195 [2024-12-09 22:54:02.525196] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:12:35.195 [2024-12-09 22:54:02.525207] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:12:35.195 [2024-12-09 22:54:02.525219] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:12:35.195 22:54:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.195 22:54:02 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:12:35.195 22:54:02 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.455 22:54:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:35.714 [2024-12-09 22:54:02.923501] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:12:35.714 22:54:02 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.714 22:54:02 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:12:35.714 22:54:02 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:35.714 22:54:02 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.714 22:54:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:35.714 ************************************ 00:12:35.714 START TEST scheduler_create_thread 00:12:35.714 ************************************ 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:35.714 2 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:35.714 3 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:35.714 4 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:35.714 5 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:35.714 6 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:35.714 7 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.714 22:54:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:35.714 8 00:12:35.714 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.714 22:54:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:12:35.714 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.714 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:35.714 9 00:12:35.714 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.714 22:54:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:12:35.714 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.715 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:35.715 10 00:12:35.715 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.715 22:54:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:12:35.715 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.715 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:35.715 22:54:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.715 22:54:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:12:35.715 22:54:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:12:35.715 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.715 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:35.975 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.975 22:54:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:12:35.975 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.975 22:54:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:37.369 22:54:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.369 22:54:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:12:37.369 22:54:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:12:37.369 22:54:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.369 22:54:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:38.304 ************************************ 00:12:38.304 22:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.304 00:12:38.304 real 0m2.618s 00:12:38.304 user 0m0.023s 00:12:38.304 sys 0m0.013s 00:12:38.304 22:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.304 22:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:38.304 END TEST scheduler_create_thread 00:12:38.304 ************************************ 00:12:38.304 22:54:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:38.304 22:54:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59395 00:12:38.304 22:54:05 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59395 ']' 00:12:38.304 22:54:05 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59395 00:12:38.304 22:54:05 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:12:38.304 22:54:05 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.304 22:54:05 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59395 00:12:38.562 killing process with pid 59395 00:12:38.562 22:54:05 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:38.562 22:54:05 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:38.562 22:54:05 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59395' 00:12:38.562 22:54:05 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59395 00:12:38.562 22:54:05 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59395 00:12:38.821 [2024-12-09 22:54:05.937866] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:12:40.276 00:12:40.276 real 0m5.911s 00:12:40.276 user 0m12.089s 00:12:40.276 sys 0m0.669s 00:12:40.276 ************************************ 00:12:40.276 END TEST event_scheduler 00:12:40.276 22:54:07 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.276 22:54:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:40.276 ************************************ 00:12:40.276 22:54:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:12:40.276 22:54:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:40.276 22:54:07 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:40.276 22:54:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.276 22:54:07 event -- common/autotest_common.sh@10 -- # set +x 00:12:40.276 ************************************ 00:12:40.276 START TEST app_repeat 00:12:40.276 ************************************ 00:12:40.276 22:54:07 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:12:40.276 Process app_repeat pid: 59507 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59507 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59507' 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:40.276 spdk_app_start Round 0 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:40.276 22:54:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59507 /var/tmp/spdk-nbd.sock 00:12:40.276 22:54:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59507 ']' 00:12:40.276 22:54:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:40.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:40.276 22:54:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.276 22:54:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:40.276 22:54:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.276 22:54:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:40.276 [2024-12-09 22:54:07.379186] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
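A minimal standalone sketch of the RPC sequence the scheduler_create_thread test above drives; the plugin name, subcommands, and flags are taken from the xtrace, while the socket and environment setup are assumptions (the harness's rpc_cmd wrapper arranges them):

    # hypothetical reconstruction; assumes a running spdk_tgt on rpc.py's default
    # socket and PYTHONPATH including test/event/scheduler so scheduler_plugin imports
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # create an idle thread pinned to core 0 (cpumask 0x1, 0% active);
    # the RPC prints the new thread id, as the bare numbers in the trace show
    tid=$("$rpc" --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0)
    # make the thread report 50% busy, then delete it
    "$rpc" --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    "$rpc" --plugin scheduler_plugin scheduler_thread_delete "$tid"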
00:12:40.276 [2024-12-09 22:54:07.379323] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59507 ] 00:12:40.276 [2024-12-09 22:54:07.562697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:40.535 [2024-12-09 22:54:07.706006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.535 [2024-12-09 22:54:07.706039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.101 22:54:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.101 22:54:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:41.102 22:54:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:41.360 Malloc0 00:12:41.360 22:54:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:41.618 Malloc1 00:12:41.618 22:54:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:41.618 22:54:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:41.878 /dev/nbd0 00:12:41.878 22:54:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:41.878 22:54:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:41.878 22:54:09 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:41.878 1+0 records in 00:12:41.878 1+0 records out 00:12:41.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463481 s, 8.8 MB/s 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.878 22:54:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:41.878 22:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.878 22:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:41.878 22:54:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:42.137 /dev/nbd1 00:12:42.137 22:54:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:42.137 22:54:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:42.137 1+0 records in 00:12:42.137 1+0 records out 00:12:42.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044312 s, 9.2 MB/s 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.137 22:54:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:42.137 22:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.137 22:54:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.137 22:54:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:42.137 22:54:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.137 
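The waitfornbd helper traced above boils down to two checks per device: the kernel must list it in /proc/partitions, and a direct-I/O read of one block must succeed. A condensed, hypothetical version of the attach-and-probe sequence (socket and bdev names as in this run, scratch path illustrative):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096          # 64 MiB malloc bdev, 4 KiB blocks -> "Malloc0"
    $rpc nbd_start_disk Malloc0 /dev/nbd0    # export it through the nbd kernel module
    grep -q -w nbd0 /proc/partitions         # kernel sees the device
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one block is readable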
22:54:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:42.396 22:54:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:42.396 { 00:12:42.396 "nbd_device": "/dev/nbd0", 00:12:42.396 "bdev_name": "Malloc0" 00:12:42.396 }, 00:12:42.396 { 00:12:42.396 "nbd_device": "/dev/nbd1", 00:12:42.396 "bdev_name": "Malloc1" 00:12:42.396 } 00:12:42.396 ]' 00:12:42.396 22:54:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:42.396 { 00:12:42.396 "nbd_device": "/dev/nbd0", 00:12:42.396 "bdev_name": "Malloc0" 00:12:42.396 }, 00:12:42.396 { 00:12:42.396 "nbd_device": "/dev/nbd1", 00:12:42.396 "bdev_name": "Malloc1" 00:12:42.396 } 00:12:42.396 ]' 00:12:42.396 22:54:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:42.655 /dev/nbd1' 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:42.655 /dev/nbd1' 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:42.655 256+0 records in 00:12:42.655 256+0 records out 00:12:42.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138634 s, 75.6 MB/s 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:42.655 256+0 records in 00:12:42.655 256+0 records out 00:12:42.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296177 s, 35.4 MB/s 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:42.655 256+0 records in 00:12:42.655 256+0 records out 00:12:42.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333457 s, 31.4 MB/s 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:42.655 22:54:09 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.655 22:54:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:42.914 22:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:42.914 22:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:42.914 22:54:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:42.914 22:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.914 22:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.914 22:54:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:42.914 22:54:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:42.914 22:54:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.914 22:54:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.914 22:54:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:43.172 22:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:43.172 22:54:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:43.172 22:54:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:43.172 22:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.172 22:54:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.172 22:54:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:43.172 22:54:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:43.172 22:54:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.172 22:54:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:43.172 22:54:10 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.172 22:54:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:43.431 22:54:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:43.431 22:54:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:43.998 22:54:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:45.374 [2024-12-09 22:54:12.429957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:45.374 [2024-12-09 22:54:12.581019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.374 [2024-12-09 22:54:12.581019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.633 [2024-12-09 22:54:12.825196] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:45.633 [2024-12-09 22:54:12.825327] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:47.010 22:54:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:47.010 spdk_app_start Round 1 00:12:47.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:47.010 22:54:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:47.010 22:54:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59507 /var/tmp/spdk-nbd.sock 00:12:47.010 22:54:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59507 ']' 00:12:47.010 22:54:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:47.010 22:54:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.010 22:54:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
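Each round's data check above follows the same pattern: fill a scratch file with random data, copy it onto every exported nbd device with direct I/O, then cmp the device back against the file. Roughly (scratch path shortened from the test's):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$nbd"                          # byte-for-byte verify
    done
    rm "$tmp"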
00:12:47.010 22:54:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.010 22:54:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:47.268 22:54:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.268 22:54:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:47.268 22:54:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:47.528 Malloc0 00:12:47.528 22:54:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:47.849 Malloc1 00:12:47.849 22:54:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:47.849 22:54:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:48.123 /dev/nbd0 00:12:48.123 22:54:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:48.123 22:54:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:48.123 1+0 records in 00:12:48.123 1+0 records out 
00:12:48.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441173 s, 9.3 MB/s 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:48.123 22:54:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:48.123 22:54:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.123 22:54:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:48.123 22:54:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:48.382 /dev/nbd1 00:12:48.382 22:54:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:48.382 22:54:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:48.382 1+0 records in 00:12:48.382 1+0 records out 00:12:48.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464431 s, 8.8 MB/s 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:48.382 22:54:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:48.382 22:54:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.382 22:54:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:48.382 22:54:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:48.382 22:54:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:48.382 22:54:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:48.641 { 00:12:48.641 "nbd_device": "/dev/nbd0", 00:12:48.641 "bdev_name": "Malloc0" 00:12:48.641 }, 00:12:48.641 { 00:12:48.641 "nbd_device": "/dev/nbd1", 00:12:48.641 "bdev_name": "Malloc1" 00:12:48.641 } 
00:12:48.641 ]' 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:48.641 { 00:12:48.641 "nbd_device": "/dev/nbd0", 00:12:48.641 "bdev_name": "Malloc0" 00:12:48.641 }, 00:12:48.641 { 00:12:48.641 "nbd_device": "/dev/nbd1", 00:12:48.641 "bdev_name": "Malloc1" 00:12:48.641 } 00:12:48.641 ]' 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:48.641 /dev/nbd1' 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:48.641 /dev/nbd1' 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:48.641 256+0 records in 00:12:48.641 256+0 records out 00:12:48.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126583 s, 82.8 MB/s 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:48.641 256+0 records in 00:12:48.641 256+0 records out 00:12:48.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320352 s, 32.7 MB/s 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.641 22:54:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:48.901 256+0 records in 00:12:48.901 256+0 records out 00:12:48.901 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0353007 s, 29.7 MB/s 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:48.901 22:54:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.901 22:54:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:49.160 22:54:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:49.160 22:54:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:49.160 22:54:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:49.160 22:54:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.160 22:54:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.160 22:54:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:49.160 22:54:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:49.160 22:54:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.160 22:54:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.160 22:54:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:49.419 22:54:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:49.419 22:54:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:49.419 22:54:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:49.419 22:54:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.419 22:54:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.419 22:54:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:49.419 22:54:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:49.419 22:54:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.419 22:54:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:49.419 22:54:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.419 22:54:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:49.679 22:54:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:49.679 22:54:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:50.246 22:54:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:51.625 [2024-12-09 22:54:18.606776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:51.625 [2024-12-09 22:54:18.760604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.625 [2024-12-09 22:54:18.760632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.906 [2024-12-09 22:54:19.000827] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:51.906 [2024-12-09 22:54:19.000937] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:53.281 spdk_app_start Round 2 00:12:53.281 22:54:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:53.281 22:54:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:53.281 22:54:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59507 /var/tmp/spdk-nbd.sock 00:12:53.281 22:54:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59507 ']' 00:12:53.281 22:54:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:53.281 22:54:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:53.281 22:54:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
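The teardown check that keeps recurring above (nbd_get_count) is just the nbd_get_disks JSON piped through jq and grep; a zero count proves both devices detached before the app is killed. Approximately:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    disks=$($rpc nbd_get_disks)              # '[]' once both disks are stopped
    # grep -c exits nonzero on zero matches, hence the trailing true in the trace
    count=$(echo "$disks" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]                       # nothing left exported
    $rpc spdk_kill_instance SIGTERM          # ask the app to shut down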
00:12:53.281 22:54:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.281 22:54:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:53.281 22:54:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:53.281 22:54:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:53.281 22:54:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:53.848 Malloc0 00:12:53.848 22:54:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:54.106 Malloc1 00:12:54.106 22:54:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:54.106 22:54:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:54.363 /dev/nbd0 00:12:54.363 22:54:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:54.363 22:54:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:54.363 22:54:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:54.363 22:54:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:54.363 22:54:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:54.363 22:54:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:54.363 22:54:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:54.363 22:54:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:54.363 22:54:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:54.363 22:54:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:54.363 22:54:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:54.363 1+0 records in 00:12:54.363 1+0 records out 
00:12:54.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276129 s, 14.8 MB/s 00:12:54.364 22:54:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:54.364 22:54:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:54.364 22:54:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:54.364 22:54:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:54.364 22:54:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:54.364 22:54:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.364 22:54:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:54.364 22:54:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:54.622 /dev/nbd1 00:12:54.622 22:54:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:54.622 22:54:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:54.622 1+0 records in 00:12:54.622 1+0 records out 00:12:54.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398797 s, 10.3 MB/s 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:54.622 22:54:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:12:54.622 22:54:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:54.622 22:54:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:54.622 22:54:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:54.622 22:54:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.622 22:54:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:54.882 { 00:12:54.882 "nbd_device": "/dev/nbd0", 00:12:54.882 "bdev_name": "Malloc0" 00:12:54.882 }, 00:12:54.882 { 00:12:54.882 "nbd_device": "/dev/nbd1", 00:12:54.882 "bdev_name": "Malloc1" 00:12:54.882 } 
00:12:54.882 ]' 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:54.882 { 00:12:54.882 "nbd_device": "/dev/nbd0", 00:12:54.882 "bdev_name": "Malloc0" 00:12:54.882 }, 00:12:54.882 { 00:12:54.882 "nbd_device": "/dev/nbd1", 00:12:54.882 "bdev_name": "Malloc1" 00:12:54.882 } 00:12:54.882 ]' 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:54.882 /dev/nbd1' 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:54.882 /dev/nbd1' 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:54.882 256+0 records in 00:12:54.882 256+0 records out 00:12:54.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00552864 s, 190 MB/s 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:54.882 256+0 records in 00:12:54.882 256+0 records out 00:12:54.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032159 s, 32.6 MB/s 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:54.882 256+0 records in 00:12:54.882 256+0 records out 00:12:54.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0390098 s, 26.9 MB/s 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.882 22:54:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:55.140 22:54:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:55.140 22:54:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:55.141 22:54:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:55.141 22:54:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.141 22:54:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.141 22:54:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:55.141 22:54:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:55.141 22:54:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.141 22:54:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.141 22:54:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:55.705 22:54:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:55.706 22:54:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:55.706 22:54:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:55.706 22:54:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.706 22:54:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.706 22:54:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:55.706 22:54:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:55.706 22:54:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.706 22:54:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:55.706 22:54:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:55.706 22:54:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:55.965 22:54:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:55.965 22:54:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:56.533 22:54:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:57.910 [2024-12-09 22:54:24.959181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:57.910 [2024-12-09 22:54:25.116376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.910 [2024-12-09 22:54:25.116376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.168 [2024-12-09 22:54:25.358292] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:58.168 [2024-12-09 22:54:25.358412] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:59.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:59.544 22:54:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59507 /var/tmp/spdk-nbd.sock 00:12:59.544 22:54:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59507 ']' 00:12:59.544 22:54:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:59.544 22:54:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.544 22:54:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
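Round 3 above only starts the app and lets the trap tear it down; the killprocess call that follows in the trace guards the kill the same way every autotest does. A hypothetical condensation of the checks visible in the xtrace (the real autotest_common.sh helper also handles the sudo case specially):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                                 # still running?
        if [ "$(uname)" = Linux ]; then
            local name=$(ps --no-headers -o comm= "$pid")
            [ "$name" != sudo ] || return 1            # never blindly signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }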
00:12:59.544 22:54:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.544 22:54:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:12:59.803 22:54:26 event.app_repeat -- event/event.sh@39 -- # killprocess 59507 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59507 ']' 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59507 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59507 00:12:59.803 killing process with pid 59507 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59507' 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59507 00:12:59.803 22:54:26 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59507 00:13:01.181 spdk_app_start is called in Round 0. 00:13:01.181 Shutdown signal received, stop current app iteration 00:13:01.181 Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 reinitialization... 00:13:01.181 spdk_app_start is called in Round 1. 00:13:01.181 Shutdown signal received, stop current app iteration 00:13:01.181 Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 reinitialization... 00:13:01.181 spdk_app_start is called in Round 2. 00:13:01.181 Shutdown signal received, stop current app iteration 00:13:01.181 Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 reinitialization... 00:13:01.181 spdk_app_start is called in Round 3. 00:13:01.181 Shutdown signal received, stop current app iteration 00:13:01.181 22:54:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:13:01.181 22:54:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:13:01.181 00:13:01.181 real 0m20.867s 00:13:01.181 user 0m44.168s 00:13:01.181 sys 0m3.858s 00:13:01.181 22:54:28 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.181 22:54:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:01.181 ************************************ 00:13:01.181 END TEST app_repeat 00:13:01.181 ************************************ 00:13:01.181 22:54:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:13:01.181 22:54:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:13:01.181 22:54:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:01.181 22:54:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.181 22:54:28 event -- common/autotest_common.sh@10 -- # set +x 00:13:01.181 ************************************ 00:13:01.181 START TEST cpu_locks 00:13:01.181 ************************************ 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:13:01.181 * Looking for test storage... 
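The START TEST / END TEST banners and the real/user/sys summary bracketing app_repeat come from the run_test wrapper; a rough, hypothetical reconstruction of its shape (the actual autotest_common.sh version also validates arguments and manages xtrace):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # prints the real/user/sys line seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh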
00:13:01.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.181 22:54:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:01.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.181 --rc genhtml_branch_coverage=1 00:13:01.181 --rc genhtml_function_coverage=1 00:13:01.181 --rc genhtml_legend=1 00:13:01.181 --rc geninfo_all_blocks=1 00:13:01.181 --rc geninfo_unexecuted_blocks=1 00:13:01.181 00:13:01.181 ' 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:01.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.181 --rc genhtml_branch_coverage=1 00:13:01.181 --rc genhtml_function_coverage=1 
00:13:01.181 --rc genhtml_legend=1 00:13:01.181 --rc geninfo_all_blocks=1 00:13:01.181 --rc geninfo_unexecuted_blocks=1 00:13:01.181 00:13:01.181 ' 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:01.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.181 --rc genhtml_branch_coverage=1 00:13:01.181 --rc genhtml_function_coverage=1 00:13:01.181 --rc genhtml_legend=1 00:13:01.181 --rc geninfo_all_blocks=1 00:13:01.181 --rc geninfo_unexecuted_blocks=1 00:13:01.181 00:13:01.181 ' 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:01.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.181 --rc genhtml_branch_coverage=1 00:13:01.181 --rc genhtml_function_coverage=1 00:13:01.181 --rc genhtml_legend=1 00:13:01.181 --rc geninfo_all_blocks=1 00:13:01.181 --rc geninfo_unexecuted_blocks=1 00:13:01.181 00:13:01.181 ' 00:13:01.181 22:54:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:13:01.181 22:54:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:13:01.181 22:54:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:13:01.181 22:54:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.181 22:54:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:01.181 ************************************ 00:13:01.181 START TEST default_locks 00:13:01.181 ************************************ 00:13:01.181 22:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:13:01.181 22:54:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59970 00:13:01.182 22:54:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:01.182 22:54:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59970 00:13:01.182 22:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59970 ']' 00:13:01.182 22:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.182 22:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.182 22:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.182 22:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.182 22:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:01.440 [2024-12-09 22:54:28.624490] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
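Note: the lcov gate traced a little earlier ("lt 1.15 2" via cmp_versions) compares version strings field by field: both strings are split on '.', '-' and ':' (IFS=.-:), corresponding fields are compared numerically, and the first unequal field decides. A compact reconstruction of that logic from the trace; the real scripts/common.sh helper also sanitizes non-numeric fields through a decimal() helper, which this sketch omits:

  cmp_lt() {   # cmp_lt 1.15 2  -> status 0 when $1 < $2
      local IFS=.-: a b v
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }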
00:13:01.440 [2024-12-09 22:54:28.624641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59970 ] 00:13:01.699 [2024-12-09 22:54:28.810932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.699 [2024-12-09 22:54:28.960737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.075 22:54:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.075 22:54:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:13:03.075 22:54:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59970 00:13:03.075 22:54:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59970 00:13:03.075 22:54:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:03.334 22:54:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59970 00:13:03.334 22:54:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59970 ']' 00:13:03.334 22:54:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59970 00:13:03.334 22:54:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:13:03.334 22:54:30 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.334 22:54:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59970 00:13:03.334 22:54:30 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:03.334 22:54:30 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:03.334 killing process with pid 59970 00:13:03.334 22:54:30 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59970' 00:13:03.334 22:54:30 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59970 00:13:03.334 22:54:30 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59970 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59970 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59970 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59970 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59970 ']' 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.869 22:54:33 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:05.869 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59970) - No such process 00:13:05.869 ERROR: process (pid: 59970) is no longer running 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:05.869 00:13:05.869 real 0m4.626s 00:13:05.869 user 0m4.472s 00:13:05.869 sys 0m0.859s 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:05.869 ************************************ 00:13:05.869 END TEST default_locks 00:13:05.869 22:54:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:05.869 ************************************ 00:13:05.869 22:54:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:13:05.869 22:54:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:05.869 22:54:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:05.869 22:54:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:06.129 ************************************ 00:13:06.129 START TEST default_locks_via_rpc 00:13:06.129 ************************************ 00:13:06.129 22:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:13:06.129 22:54:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60051 00:13:06.129 22:54:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:06.129 22:54:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60051 00:13:06.129 22:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60051 ']' 00:13:06.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
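Note: waitforlisten's loop body is hidden in the trace by xtrace_disable, but its contract is visible: given a pid and an RPC socket path (default /var/tmp/spdk.sock), block until the target answers on that socket or max_retries (100, per the trace) runs out, failing early if the process dies. A minimal stand-in with the same shape; probing with rpc_get_methods is this sketch's assumption, and the real helper may probe differently:

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do
          kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
          scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }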
00:13:06.129 22:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.129 22:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.129 22:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.129 22:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.129 22:54:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.129 [2024-12-09 22:54:33.319506] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:13:06.129 [2024-12-09 22:54:33.319636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60051 ] 00:13:06.388 [2024-12-09 22:54:33.504414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.388 [2024-12-09 22:54:33.662312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60051 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60051 00:13:07.766 22:54:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:08.025 22:54:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60051 00:13:08.025 22:54:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60051 ']' 00:13:08.025 22:54:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60051 00:13:08.025 22:54:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:13:08.025 22:54:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.025 22:54:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60051 00:13:08.025 killing process with pid 60051 00:13:08.025 22:54:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.025 22:54:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.025 22:54:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60051' 00:13:08.025 22:54:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60051 00:13:08.025 22:54:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60051 00:13:11.308 00:13:11.308 real 0m4.690s 00:13:11.308 user 0m4.508s 00:13:11.308 sys 0m0.844s 00:13:11.308 22:54:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.308 22:54:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.308 ************************************ 00:13:11.308 END TEST default_locks_via_rpc 00:13:11.308 ************************************ 00:13:11.308 22:54:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:13:11.308 22:54:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:11.308 22:54:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.308 22:54:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:11.308 ************************************ 00:13:11.308 START TEST non_locking_app_on_locked_coremask 00:13:11.308 ************************************ 00:13:11.308 22:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:13:11.308 22:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60136 00:13:11.308 22:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60136 /var/tmp/spdk.sock 00:13:11.308 22:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:11.308 22:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60136 ']' 00:13:11.308 22:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.308 22:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.308 22:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
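Note: the default_locks_via_rpc run that just finished never restarts the target. It releases the core locks at runtime with framework_disable_cpumask_locks, checks that no /var/tmp/spdk_cpu_lock_* files remain, re-arms them with framework_enable_cpumask_locks, and confirms the lock is held again via lslocks. The same round trip with the commands from the trace (rpc_cmd in the trace is the suite's thin wrapper around scripts/rpc.py):

  scripts/rpc.py framework_disable_cpumask_locks        # locks dropped while running
  scripts/rpc.py framework_enable_cpumask_locks         # locks re-acquired
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock    # verify the claim is back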
00:13:11.308 22:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.308 22:54:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:11.308 [2024-12-09 22:54:38.082167] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:13:11.308 [2024-12-09 22:54:38.082543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60136 ] 00:13:11.308 [2024-12-09 22:54:38.265160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.308 [2024-12-09 22:54:38.409141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.244 22:54:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.244 22:54:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:13:12.244 22:54:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60152 00:13:12.244 22:54:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:13:12.244 22:54:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60152 /var/tmp/spdk2.sock 00:13:12.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:12.244 22:54:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60152 ']' 00:13:12.244 22:54:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:12.244 22:54:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.244 22:54:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:12.244 22:54:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.244 22:54:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:12.525 [2024-12-09 22:54:39.585356] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:13:12.525 [2024-12-09 22:54:39.585748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60152 ] 00:13:12.525 [2024-12-09 22:54:39.782109] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:12.525 [2024-12-09 22:54:39.782179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.809 [2024-12-09 22:54:40.104747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.338 22:54:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.338 22:54:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:13:15.338 22:54:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60136 00:13:15.338 22:54:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60136 00:13:15.338 22:54:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:16.276 22:54:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60136 00:13:16.276 22:54:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60136 ']' 00:13:16.276 22:54:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60136 00:13:16.276 22:54:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:13:16.276 22:54:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.276 22:54:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60136 00:13:16.276 killing process with pid 60136 00:13:16.276 22:54:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.276 22:54:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.276 22:54:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60136' 00:13:16.276 22:54:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60136 00:13:16.276 22:54:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60136 00:13:21.546 22:54:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60152 00:13:21.546 22:54:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60152 ']' 00:13:21.546 22:54:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60152 00:13:21.546 22:54:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:13:21.546 22:54:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.546 22:54:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60152 00:13:21.546 killing process with pid 60152 00:13:21.546 22:54:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.546 22:54:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.546 22:54:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60152' 00:13:21.546 22:54:48 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60152 00:13:21.546 22:54:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60152 00:13:24.077 00:13:24.078 real 0m13.360s 00:13:24.078 user 0m13.546s 00:13:24.078 sys 0m1.852s 00:13:24.078 ************************************ 00:13:24.078 END TEST non_locking_app_on_locked_coremask 00:13:24.078 ************************************ 00:13:24.078 22:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.078 22:54:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:24.078 22:54:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:13:24.078 22:54:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:24.078 22:54:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.078 22:54:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:24.078 ************************************ 00:13:24.078 START TEST locking_app_on_unlocked_coremask 00:13:24.078 ************************************ 00:13:24.078 22:54:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:13:24.078 22:54:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60322 00:13:24.078 22:54:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:13:24.078 22:54:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60322 /var/tmp/spdk.sock 00:13:24.078 22:54:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60322 ']' 00:13:24.078 22:54:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.078 22:54:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.078 22:54:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.078 22:54:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.078 22:54:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:24.335 [2024-12-09 22:54:51.514718] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:13:24.335 [2024-12-09 22:54:51.514866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60322 ] 00:13:24.594 [2024-12-09 22:54:51.686973] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
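Note: the non_locking_app_on_locked_coremask case that just ended is the intended escape hatch: a second target may share core 0 with a locked one as long as it opts out of locking and uses its own RPC socket. The two launch lines, exactly as traced:

  build/bin/spdk_tgt -m 0x1 &                                                 # claims the core 0 lock
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, takes no lock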
00:13:24.594 [2024-12-09 22:54:51.687040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.594 [2024-12-09 22:54:51.830932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.966 22:54:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.966 22:54:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:13:25.966 22:54:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:25.966 22:54:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60338 00:13:25.966 22:54:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60338 /var/tmp/spdk2.sock 00:13:25.966 22:54:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60338 ']' 00:13:25.966 22:54:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:25.966 22:54:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.966 22:54:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:25.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:25.966 22:54:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.966 22:54:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:25.966 [2024-12-09 22:54:53.022944] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:13:25.966 [2024-12-09 22:54:53.023102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60338 ] 00:13:25.966 [2024-12-09 22:54:53.210183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.225 [2024-12-09 22:54:53.521953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.825 22:54:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.825 22:54:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:13:28.825 22:54:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60338 00:13:28.825 22:54:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60338 00:13:28.825 22:54:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:29.393 22:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60322 00:13:29.393 22:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60322 ']' 00:13:29.393 22:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60322 00:13:29.393 22:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:13:29.393 22:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.393 22:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60322 00:13:29.393 22:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.393 22:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.393 killing process with pid 60322 00:13:29.393 22:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60322' 00:13:29.393 22:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60322 00:13:29.393 22:54:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60322 00:13:34.713 22:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60338 00:13:34.713 22:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60338 ']' 00:13:34.713 22:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60338 00:13:34.713 22:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:13:34.713 22:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.713 22:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60338 00:13:34.713 22:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:34.713 22:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:34.713 killing process with pid 60338 00:13:34.713 22:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60338' 00:13:34.713 22:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60338 00:13:34.713 22:55:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60338 00:13:37.118 00:13:37.118 real 0m12.966s 00:13:37.118 user 0m13.220s 00:13:37.118 sys 0m1.721s 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:37.118 ************************************ 00:13:37.118 END TEST locking_app_on_unlocked_coremask 00:13:37.118 ************************************ 00:13:37.118 22:55:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:13:37.118 22:55:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:37.118 22:55:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.118 22:55:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:37.118 ************************************ 00:13:37.118 START TEST locking_app_on_locked_coremask 00:13:37.118 ************************************ 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60497 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60497 /var/tmp/spdk.sock 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60497 ']' 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.118 22:55:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:37.377 [2024-12-09 22:55:04.559238] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
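Note: locking_app_on_unlocked_coremask, which ended above, inverts that ordering: the first target starts with --disable-cpumask-locks, so the second, lock-taking target wins core 0 unopposed, and the lslocks check is made against the second pid (60338 in the trace). As launched above:

  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no core lock
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # locks core 0 despite the overlap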
00:13:37.378 [2024-12-09 22:55:04.559387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60497 ] 00:13:37.662 [2024-12-09 22:55:04.746460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.662 [2024-12-09 22:55:04.894576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60519 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60519 /var/tmp/spdk2.sock 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60519 /var/tmp/spdk2.sock 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60519 /var/tmp/spdk2.sock 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60519 ']' 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.633 22:55:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:38.891 [2024-12-09 22:55:06.057014] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
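Note: NOT, wrapped around waitforlisten here, inverts a command's exit status so an expected failure keeps the suite green: it returns success only when the wrapped command fails. Reduced from the es bookkeeping in the trace (the real helper also validates its argument and treats high exit codes specially, which this sketch skips):

  NOT() {
      if "$@"; then
          return 1    # unexpected success
      fi
      return 0        # expected failure
  }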
00:13:38.891 [2024-12-09 22:55:06.057170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60519 ] 00:13:39.150 [2024-12-09 22:55:06.246295] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60497 has claimed it. 00:13:39.150 [2024-12-09 22:55:06.246369] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:39.406 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60519) - No such process 00:13:39.406 ERROR: process (pid: 60519) is no longer running 00:13:39.406 22:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.406 22:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:13:39.406 22:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:13:39.406 22:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.406 22:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.406 22:55:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.407 22:55:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60497 00:13:39.407 22:55:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60497 00:13:39.407 22:55:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:39.971 22:55:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60497 00:13:39.971 22:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60497 ']' 00:13:39.971 22:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60497 00:13:39.971 22:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:13:39.971 22:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.971 22:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60497 00:13:39.971 22:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.971 22:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.971 killing process with pid 60497 00:13:39.971 22:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60497' 00:13:39.971 22:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60497 00:13:39.971 22:55:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60497 00:13:43.253 00:13:43.253 real 0m5.475s 00:13:43.253 user 0m5.523s 00:13:43.253 sys 0m1.017s 00:13:43.253 22:55:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.253 22:55:09 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:13:43.253 ************************************ 00:13:43.253 END TEST locking_app_on_locked_coremask 00:13:43.253 ************************************ 00:13:43.253 22:55:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:13:43.253 22:55:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.253 22:55:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.253 22:55:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:43.253 ************************************ 00:13:43.253 START TEST locking_overlapped_coremask 00:13:43.253 ************************************ 00:13:43.253 22:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:13:43.253 22:55:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60594 00:13:43.253 22:55:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:13:43.253 22:55:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60594 /var/tmp/spdk.sock 00:13:43.253 22:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60594 ']' 00:13:43.253 22:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.253 22:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.253 22:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.253 22:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.253 22:55:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:43.253 [2024-12-09 22:55:10.105788] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
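Note: locking_app_on_locked_coremask, closed out above, demonstrated the conflict path: the second target aborted with "Cannot create lock on core 0, probably process 60497 has claimed it" and exited before ever listening. The claim is taken from C code (app.c:claim_cpu_cores, per the error) on per-core files named /var/tmp/spdk_cpu_lock_<core>; a hypothetical shell analogy of the same effect using flock(1) — the in-tree code takes the lock from C, so this is an illustration, not the actual mechanism:

  exec 9> /var/tmp/spdk_cpu_lock_000
  flock -n 9 || { echo 'core 0 already claimed'; exit 1; }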
00:13:43.253 [2024-12-09 22:55:10.106762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60594 ] 00:13:43.253 [2024-12-09 22:55:10.313512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.253 [2024-12-09 22:55:10.464649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.253 [2024-12-09 22:55:10.465519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.253 [2024-12-09 22:55:10.465535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60612 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60612 /var/tmp/spdk2.sock 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60612 /var/tmp/spdk2.sock 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60612 /var/tmp/spdk2.sock 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60612 ']' 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.192 22:55:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:44.450 [2024-12-09 22:55:11.618823] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
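Note: the two masks in this overlapped test collide in exactly one place: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so core 2 is the contested core named in the claim error that follows. The arithmetic:

  printf 'overlap mask: 0x%x\n' $(( 0x07 & 0x1c ))   # -> 0x4, i.e. bit 2 -> core 2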
00:13:44.450 [2024-12-09 22:55:11.619444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60612 ] 00:13:44.710 [2024-12-09 22:55:11.806150] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60594 has claimed it. 00:13:44.710 [2024-12-09 22:55:11.806221] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:44.969 ERROR: process (pid: 60612) is no longer running 00:13:44.969 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60612) - No such process 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60594 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60594 ']' 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60594 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60594 00:13:44.969 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.970 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.970 killing process with pid 60594 00:13:44.970 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60594' 00:13:44.970 22:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60594 00:13:44.970 22:55:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60594 00:13:48.262 00:13:48.262 real 0m4.950s 00:13:48.262 user 0m13.197s 00:13:48.262 sys 0m0.792s 00:13:48.262 22:55:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:48.262 ************************************ 00:13:48.262 END TEST locking_overlapped_coremask 00:13:48.262 22:55:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:48.262 ************************************ 00:13:48.262 22:55:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:13:48.262 22:55:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:48.262 22:55:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:48.262 22:55:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:48.262 ************************************ 00:13:48.262 START TEST locking_overlapped_coremask_via_rpc 00:13:48.262 ************************************ 00:13:48.262 22:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:13:48.262 22:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60687 00:13:48.262 22:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60687 /var/tmp/spdk.sock 00:13:48.262 22:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60687 ']' 00:13:48.262 22:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:13:48.262 22:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.262 22:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.262 22:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.262 22:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.262 22:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.262 [2024-12-09 22:55:15.140826] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:13:48.262 [2024-12-09 22:55:15.141022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60687 ] 00:13:48.262 [2024-12-09 22:55:15.330510] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
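Note: check_remaining_locks, traced just before the kill above, asserts that mask 0x7 left exactly the three expected per-core lock files by comparing a glob of the real files against a brace expansion of the expected names:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]]   # any extra or missing lock file fails the test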
00:13:48.262 [2024-12-09 22:55:15.330577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:48.262 [2024-12-09 22:55:15.483623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.262 [2024-12-09 22:55:15.483730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.262 [2024-12-09 22:55:15.483760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.642 22:55:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.642 22:55:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:49.642 22:55:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60705 00:13:49.642 22:55:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60705 /var/tmp/spdk2.sock 00:13:49.642 22:55:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:49.642 22:55:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60705 ']' 00:13:49.642 22:55:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:49.642 22:55:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.642 22:55:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:49.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:49.642 22:55:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.642 22:55:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.642 [2024-12-09 22:55:16.686331] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:13:49.642 [2024-12-09 22:55:16.686738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60705 ] 00:13:49.642 [2024-12-09 22:55:16.879784] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
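The two targets deliberately overlap at exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so only core 2 is contested, which is why every claim error in this test names core 2. A one-line check:

    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 set: only core 2 is shared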
00:13:49.642 [2024-12-09 22:55:16.879867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:49.901 [2024-12-09 22:55:17.186646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.901 [2024-12-09 22:55:17.190656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.901 [2024-12-09 22:55:17.190688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:52.448 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.448 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:52.448 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:52.448 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.448 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.448 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.448 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:52.448 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:52.448 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.449 [2024-12-09 22:55:19.286648] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60687 has claimed it. 
00:13:52.449 request: 00:13:52.449 { 00:13:52.449 "method": "framework_enable_cpumask_locks", 00:13:52.449 "req_id": 1 00:13:52.449 } 00:13:52.449 Got JSON-RPC error response 00:13:52.449 response: 00:13:52.449 { 00:13:52.449 "code": -32603, 00:13:52.449 "message": "Failed to claim CPU core: 2" 00:13:52.449 } 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60687 /var/tmp/spdk.sock 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60687 ']' 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60705 /var/tmp/spdk2.sock 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60705 ']' 00:13:52.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
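With the first target (pid 60687) still holding the lock on core 2, sending framework_enable_cpumask_locks to the second target fails with the JSON-RPC error shown above (-32603, "Failed to claim CPU core: 2"). A minimal reproduction, assuming both targets from this test are still running:

    # Expect JSON-RPC error -32603 while core 2 is held by the other target:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks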
00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.449 ************************************ 00:13:52.449 END TEST locking_overlapped_coremask_via_rpc 00:13:52.449 ************************************ 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:52.449 00:13:52.449 real 0m4.726s 00:13:52.449 user 0m1.331s 00:13:52.449 sys 0m0.259s 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.449 22:55:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.709 22:55:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:13:52.709 22:55:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60687 ]] 00:13:52.709 22:55:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60687 00:13:52.709 22:55:19 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60687 ']' 00:13:52.709 22:55:19 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60687 00:13:52.709 22:55:19 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:13:52.709 22:55:19 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.709 22:55:19 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60687 00:13:52.709 killing process with pid 60687 00:13:52.709 22:55:19 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:52.709 22:55:19 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:52.709 22:55:19 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60687' 00:13:52.709 22:55:19 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60687 00:13:52.709 22:55:19 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60687 00:13:55.248 22:55:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60705 ]] 00:13:55.248 22:55:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60705 00:13:55.248 22:55:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60705 ']' 00:13:55.248 22:55:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60705 00:13:55.248 22:55:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:13:55.248 22:55:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.248 
22:55:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60705 00:13:55.507 killing process with pid 60705 00:13:55.507 22:55:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:55.507 22:55:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:55.507 22:55:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60705' 00:13:55.507 22:55:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60705 00:13:55.507 22:55:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60705 00:13:58.045 22:55:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:58.045 22:55:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:13:58.045 22:55:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60687 ]] 00:13:58.045 22:55:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60687 00:13:58.045 Process with pid 60687 is not found 00:13:58.045 22:55:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60687 ']' 00:13:58.045 22:55:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60687 00:13:58.045 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60687) - No such process 00:13:58.045 22:55:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60687 is not found' 00:13:58.045 22:55:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60705 ]] 00:13:58.045 22:55:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60705 00:13:58.045 22:55:25 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60705 ']' 00:13:58.045 22:55:25 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60705 00:13:58.045 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60705) - No such process 00:13:58.045 Process with pid 60705 is not found 00:13:58.045 22:55:25 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60705 is not found' 00:13:58.045 22:55:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:58.045 00:13:58.045 real 0m57.006s 00:13:58.045 user 1m34.532s 00:13:58.045 sys 0m8.875s 00:13:58.045 ************************************ 00:13:58.045 END TEST cpu_locks 00:13:58.045 ************************************ 00:13:58.045 22:55:25 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.045 22:55:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:58.045 ************************************ 00:13:58.045 END TEST event 00:13:58.045 ************************************ 00:13:58.045 00:13:58.045 real 1m29.424s 00:13:58.045 user 2m38.288s 00:13:58.045 sys 0m14.198s 00:13:58.045 22:55:25 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.045 22:55:25 event -- common/autotest_common.sh@10 -- # set +x 00:13:58.304 22:55:25 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:58.304 22:55:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:58.304 22:55:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.304 22:55:25 -- common/autotest_common.sh@10 -- # set +x 00:13:58.304 ************************************ 00:13:58.304 START TEST thread 00:13:58.304 ************************************ 00:13:58.304 22:55:25 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:58.304 * Looking for test storage... 
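The cleanup traced above probes each pid with kill -0, which sends no signal and only reports whether the process exists; that is how killprocess tells a live reactor apart from one that already exited. For example:

    kill -0 60705 2>/dev/null || echo 'Process with pid 60705 is not found'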
00:13:58.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:13:58.304 22:55:25 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:58.304 22:55:25 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:13:58.304 22:55:25 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:58.563 22:55:25 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:58.563 22:55:25 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.563 22:55:25 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.563 22:55:25 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.563 22:55:25 thread -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.563 22:55:25 thread -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.563 22:55:25 thread -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.563 22:55:25 thread -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.563 22:55:25 thread -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.563 22:55:25 thread -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.563 22:55:25 thread -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.563 22:55:25 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.563 22:55:25 thread -- scripts/common.sh@344 -- # case "$op" in 00:13:58.563 22:55:25 thread -- scripts/common.sh@345 -- # : 1 00:13:58.563 22:55:25 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.563 22:55:25 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:58.563 22:55:25 thread -- scripts/common.sh@365 -- # decimal 1 00:13:58.563 22:55:25 thread -- scripts/common.sh@353 -- # local d=1 00:13:58.563 22:55:25 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.563 22:55:25 thread -- scripts/common.sh@355 -- # echo 1 00:13:58.563 22:55:25 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.563 22:55:25 thread -- scripts/common.sh@366 -- # decimal 2 00:13:58.563 22:55:25 thread -- scripts/common.sh@353 -- # local d=2 00:13:58.563 22:55:25 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.563 22:55:25 thread -- scripts/common.sh@355 -- # echo 2 00:13:58.563 22:55:25 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.563 22:55:25 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.563 22:55:25 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.563 22:55:25 thread -- scripts/common.sh@368 -- # return 0 00:13:58.563 22:55:25 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.563 22:55:25 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:58.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.563 --rc genhtml_branch_coverage=1 00:13:58.563 --rc genhtml_function_coverage=1 00:13:58.563 --rc genhtml_legend=1 00:13:58.563 --rc geninfo_all_blocks=1 00:13:58.563 --rc geninfo_unexecuted_blocks=1 00:13:58.563 00:13:58.563 ' 00:13:58.563 22:55:25 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:58.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.563 --rc genhtml_branch_coverage=1 00:13:58.563 --rc genhtml_function_coverage=1 00:13:58.563 --rc genhtml_legend=1 00:13:58.563 --rc geninfo_all_blocks=1 00:13:58.563 --rc geninfo_unexecuted_blocks=1 00:13:58.563 00:13:58.563 ' 00:13:58.563 22:55:25 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:58.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:13:58.563 --rc genhtml_branch_coverage=1 00:13:58.563 --rc genhtml_function_coverage=1 00:13:58.563 --rc genhtml_legend=1 00:13:58.563 --rc geninfo_all_blocks=1 00:13:58.563 --rc geninfo_unexecuted_blocks=1 00:13:58.563 00:13:58.563 ' 00:13:58.563 22:55:25 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:58.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.563 --rc genhtml_branch_coverage=1 00:13:58.563 --rc genhtml_function_coverage=1 00:13:58.563 --rc genhtml_legend=1 00:13:58.563 --rc geninfo_all_blocks=1 00:13:58.563 --rc geninfo_unexecuted_blocks=1 00:13:58.563 00:13:58.563 ' 00:13:58.563 22:55:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:58.563 22:55:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:13:58.563 22:55:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.563 22:55:25 thread -- common/autotest_common.sh@10 -- # set +x 00:13:58.563 ************************************ 00:13:58.563 START TEST thread_poller_perf 00:13:58.563 ************************************ 00:13:58.563 22:55:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:58.563 [2024-12-09 22:55:25.724791] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:13:58.563 [2024-12-09 22:55:25.725221] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60911 ] 00:13:58.823 [2024-12-09 22:55:25.931326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.823 [2024-12-09 22:55:26.079781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.823 Running 1000 pollers for 1 seconds with 1 microseconds period. 
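Mapping the poller_perf flags to the banner above (inferred from this run and the zero-period run that follows): -b sets the poller count, -t the run time in seconds, and -l the poller period in microseconds, with -l 0 meaning busy-loop pollers.

    # Flag mapping, inferred from the run banners in this log:
    #   -b 1000 -> 1000 pollers
    #   -l 1    -> 1 microsecond poller period (-l 0 = busy-loop, see next run)
    #   -t 1    -> run for 1 second
    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1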
00:14:00.272 [2024-12-09T22:55:27.608Z] ====================================== 00:14:00.272 [2024-12-09T22:55:27.608Z] busy:2501792804 (cyc) 00:14:00.272 [2024-12-09T22:55:27.608Z] total_run_count: 369000 00:14:00.272 [2024-12-09T22:55:27.608Z] tsc_hz: 2490000000 (cyc) 00:14:00.272 [2024-12-09T22:55:27.608Z] ====================================== 00:14:00.272 [2024-12-09T22:55:27.608Z] poller_cost: 6779 (cyc), 2722 (nsec) 00:14:00.272 00:14:00.272 ************************************ 00:14:00.272 END TEST thread_poller_perf 00:14:00.272 ************************************ 00:14:00.272 real 0m1.673s 00:14:00.272 user 0m1.426s 00:14:00.272 sys 0m0.135s 00:14:00.272 22:55:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.272 22:55:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:14:00.272 22:55:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:14:00.272 22:55:27 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:14:00.272 22:55:27 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.272 22:55:27 thread -- common/autotest_common.sh@10 -- # set +x 00:14:00.272 ************************************ 00:14:00.272 START TEST thread_poller_perf 00:14:00.272 ************************************ 00:14:00.272 22:55:27 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:14:00.272 [2024-12-09 22:55:27.477499] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:14:00.272 [2024-12-09 22:55:27.477628] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60948 ] 00:14:00.531 [2024-12-09 22:55:27.646517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.531 Running 1000 pollers for 1 seconds with 0 microseconds period. 
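The summary table works out directly from its own numbers: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts cycles at the reported tsc_hz (2.49 GHz here). Reproducing the first run's line in shell arithmetic (the zero-period run below follows the same arithmetic):

    echo $(( 2501792804 / 369000 ))              # -> 6779 (cyc)
    echo '6779 * 1000000000 / 2490000000' | bc   # -> 2722 (nsec)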
00:14:00.531 [2024-12-09 22:55:27.802878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.907 [2024-12-09T22:55:29.243Z] ====================================== 00:14:01.907 [2024-12-09T22:55:29.243Z] busy:2494875096 (cyc) 00:14:01.907 [2024-12-09T22:55:29.243Z] total_run_count: 4184000 00:14:01.907 [2024-12-09T22:55:29.243Z] tsc_hz: 2490000000 (cyc) 00:14:01.907 [2024-12-09T22:55:29.243Z] ====================================== 00:14:01.907 [2024-12-09T22:55:29.243Z] poller_cost: 596 (cyc), 239 (nsec) 00:14:01.907 00:14:01.907 real 0m1.639s 00:14:01.907 user 0m1.415s 00:14:01.907 sys 0m0.114s 00:14:01.907 22:55:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.907 22:55:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:14:01.907 ************************************ 00:14:01.907 END TEST thread_poller_perf 00:14:01.907 ************************************ 00:14:01.907 22:55:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:14:01.907 ************************************ 00:14:01.907 END TEST thread 00:14:01.907 ************************************ 00:14:01.907 00:14:01.907 real 0m3.705s 00:14:01.907 user 0m3.011s 00:14:01.907 sys 0m0.477s 00:14:01.907 22:55:29 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.907 22:55:29 thread -- common/autotest_common.sh@10 -- # set +x 00:14:01.907 22:55:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:14:01.907 22:55:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:14:01.907 22:55:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:01.907 22:55:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.907 22:55:29 -- common/autotest_common.sh@10 -- # set +x 00:14:01.907 ************************************ 00:14:01.907 START TEST app_cmdline 00:14:01.907 ************************************ 00:14:01.907 22:55:29 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:14:02.165 * Looking for test storage... 
00:14:02.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:02.165 22:55:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:02.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.165 --rc genhtml_branch_coverage=1 00:14:02.165 --rc genhtml_function_coverage=1 00:14:02.165 --rc genhtml_legend=1 00:14:02.165 --rc geninfo_all_blocks=1 00:14:02.165 --rc geninfo_unexecuted_blocks=1 00:14:02.165 00:14:02.165 ' 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:02.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.165 --rc genhtml_branch_coverage=1 00:14:02.165 --rc genhtml_function_coverage=1 00:14:02.165 --rc genhtml_legend=1 00:14:02.165 --rc geninfo_all_blocks=1 00:14:02.165 --rc geninfo_unexecuted_blocks=1 00:14:02.165 
00:14:02.165 ' 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:02.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.165 --rc genhtml_branch_coverage=1 00:14:02.165 --rc genhtml_function_coverage=1 00:14:02.165 --rc genhtml_legend=1 00:14:02.165 --rc geninfo_all_blocks=1 00:14:02.165 --rc geninfo_unexecuted_blocks=1 00:14:02.165 00:14:02.165 ' 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:02.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.165 --rc genhtml_branch_coverage=1 00:14:02.165 --rc genhtml_function_coverage=1 00:14:02.165 --rc genhtml_legend=1 00:14:02.165 --rc geninfo_all_blocks=1 00:14:02.165 --rc geninfo_unexecuted_blocks=1 00:14:02.165 00:14:02.165 ' 00:14:02.165 22:55:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:14:02.165 22:55:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61031 00:14:02.165 22:55:29 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:14:02.165 22:55:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61031 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61031 ']' 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.165 22:55:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:02.424 [2024-12-09 22:55:29.526314] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:14:02.424 [2024-12-09 22:55:29.526805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61031 ] 00:14:02.424 [2024-12-09 22:55:29.713180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.683 [2024-12-09 22:55:29.867740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.620 22:55:30 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.620 22:55:30 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:14:03.620 22:55:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:14:03.879 { 00:14:03.879 "version": "SPDK v25.01-pre git sha1 f80471632", 00:14:03.879 "fields": { 00:14:03.879 "major": 25, 00:14:03.879 "minor": 1, 00:14:03.879 "patch": 0, 00:14:03.879 "suffix": "-pre", 00:14:03.879 "commit": "f80471632" 00:14:03.879 } 00:14:03.879 } 00:14:03.879 22:55:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:14:03.879 22:55:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:14:03.879 22:55:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:14:03.879 22:55:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:14:03.879 22:55:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:03.879 22:55:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:14:03.879 22:55:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.879 22:55:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:14:03.879 22:55:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:14:03.879 22:55:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:03.879 22:55:31 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:04.138 request: 00:14:04.138 { 00:14:04.138 "method": "env_dpdk_get_mem_stats", 00:14:04.138 "req_id": 1 00:14:04.138 } 00:14:04.138 Got JSON-RPC error response 00:14:04.138 response: 00:14:04.138 { 00:14:04.138 "code": -32601, 00:14:04.138 "message": "Method not found" 00:14:04.138 } 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:04.138 22:55:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61031 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61031 ']' 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61031 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61031 00:14:04.138 killing process with pid 61031 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61031' 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@973 -- # kill 61031 00:14:04.138 22:55:31 app_cmdline -- common/autotest_common.sh@978 -- # wait 61031 00:14:06.730 00:14:06.730 real 0m4.720s 00:14:06.730 user 0m4.844s 00:14:06.730 sys 0m0.768s 00:14:06.730 22:55:33 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.730 22:55:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:06.730 ************************************ 00:14:06.730 END TEST app_cmdline 00:14:06.730 ************************************ 00:14:06.730 22:55:33 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:14:06.730 22:55:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:06.730 22:55:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.730 22:55:33 -- common/autotest_common.sh@10 -- # set +x 00:14:06.730 ************************************ 00:14:06.730 START TEST version 00:14:06.730 ************************************ 00:14:06.730 22:55:33 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:14:06.990 * Looking for test storage... 
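Because the target for this test was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, every other method is rejected before dispatch, which is what produced the -32601 "Method not found" response above rather than a real env_dpdk_get_mem_stats failure. A minimal reproduction against that target:

    # Any method outside the allow-list is refused with JSON-RPC -32601:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats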
00:14:06.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:14:06.990 22:55:34 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:06.990 22:55:34 version -- common/autotest_common.sh@1711 -- # lcov --version 00:14:06.990 22:55:34 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:06.990 22:55:34 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:06.990 22:55:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.990 22:55:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.990 22:55:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.990 22:55:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.990 22:55:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.990 22:55:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.990 22:55:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.990 22:55:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.990 22:55:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.990 22:55:34 version -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.990 22:55:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.990 22:55:34 version -- scripts/common.sh@344 -- # case "$op" in 00:14:06.990 22:55:34 version -- scripts/common.sh@345 -- # : 1 00:14:06.990 22:55:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.990 22:55:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.990 22:55:34 version -- scripts/common.sh@365 -- # decimal 1 00:14:06.990 22:55:34 version -- scripts/common.sh@353 -- # local d=1 00:14:06.990 22:55:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.990 22:55:34 version -- scripts/common.sh@355 -- # echo 1 00:14:06.990 22:55:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.990 22:55:34 version -- scripts/common.sh@366 -- # decimal 2 00:14:06.990 22:55:34 version -- scripts/common.sh@353 -- # local d=2 00:14:06.990 22:55:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.990 22:55:34 version -- scripts/common.sh@355 -- # echo 2 00:14:06.990 22:55:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.990 22:55:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.990 22:55:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.990 22:55:34 version -- scripts/common.sh@368 -- # return 0 00:14:06.990 22:55:34 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.990 22:55:34 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:06.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.990 --rc genhtml_branch_coverage=1 00:14:06.990 --rc genhtml_function_coverage=1 00:14:06.990 --rc genhtml_legend=1 00:14:06.990 --rc geninfo_all_blocks=1 00:14:06.990 --rc geninfo_unexecuted_blocks=1 00:14:06.990 00:14:06.990 ' 00:14:06.990 22:55:34 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:06.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.990 --rc genhtml_branch_coverage=1 00:14:06.990 --rc genhtml_function_coverage=1 00:14:06.990 --rc genhtml_legend=1 00:14:06.990 --rc geninfo_all_blocks=1 00:14:06.990 --rc geninfo_unexecuted_blocks=1 00:14:06.990 00:14:06.990 ' 00:14:06.990 22:55:34 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:06.990 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:06.990 --rc genhtml_branch_coverage=1 00:14:06.990 --rc genhtml_function_coverage=1 00:14:06.990 --rc genhtml_legend=1 00:14:06.990 --rc geninfo_all_blocks=1 00:14:06.990 --rc geninfo_unexecuted_blocks=1 00:14:06.990 00:14:06.990 ' 00:14:06.990 22:55:34 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:06.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.990 --rc genhtml_branch_coverage=1 00:14:06.990 --rc genhtml_function_coverage=1 00:14:06.990 --rc genhtml_legend=1 00:14:06.990 --rc geninfo_all_blocks=1 00:14:06.990 --rc geninfo_unexecuted_blocks=1 00:14:06.990 00:14:06.990 ' 00:14:06.990 22:55:34 version -- app/version.sh@17 -- # get_header_version major 00:14:06.990 22:55:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:06.990 22:55:34 version -- app/version.sh@14 -- # cut -f2 00:14:06.990 22:55:34 version -- app/version.sh@14 -- # tr -d '"' 00:14:06.990 22:55:34 version -- app/version.sh@17 -- # major=25 00:14:06.990 22:55:34 version -- app/version.sh@18 -- # get_header_version minor 00:14:06.990 22:55:34 version -- app/version.sh@14 -- # cut -f2 00:14:06.991 22:55:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:06.991 22:55:34 version -- app/version.sh@14 -- # tr -d '"' 00:14:06.991 22:55:34 version -- app/version.sh@18 -- # minor=1 00:14:06.991 22:55:34 version -- app/version.sh@19 -- # get_header_version patch 00:14:06.991 22:55:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:06.991 22:55:34 version -- app/version.sh@14 -- # cut -f2 00:14:06.991 22:55:34 version -- app/version.sh@14 -- # tr -d '"' 00:14:06.991 22:55:34 version -- app/version.sh@19 -- # patch=0 00:14:06.991 22:55:34 version -- app/version.sh@20 -- # get_header_version suffix 00:14:06.991 22:55:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:14:06.991 22:55:34 version -- app/version.sh@14 -- # cut -f2 00:14:06.991 22:55:34 version -- app/version.sh@14 -- # tr -d '"' 00:14:06.991 22:55:34 version -- app/version.sh@20 -- # suffix=-pre 00:14:06.991 22:55:34 version -- app/version.sh@22 -- # version=25.1 00:14:06.991 22:55:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:14:06.991 22:55:34 version -- app/version.sh@28 -- # version=25.1rc0 00:14:06.991 22:55:34 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:06.991 22:55:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:14:06.991 22:55:34 version -- app/version.sh@30 -- # py_version=25.1rc0 00:14:06.991 22:55:34 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:14:06.991 ************************************ 00:14:06.991 END TEST version 00:14:06.991 ************************************ 00:14:06.991 00:14:06.991 real 0m0.309s 00:14:06.991 user 0m0.174s 00:14:06.991 sys 0m0.194s 00:14:06.991 22:55:34 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.991 22:55:34 version -- common/autotest_common.sh@10 -- # set +x 00:14:07.251 22:55:34 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:14:07.251 22:55:34 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:14:07.251 22:55:34 -- spdk/autotest.sh@194 -- # uname -s 00:14:07.251 22:55:34 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:14:07.251 22:55:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:14:07.251 22:55:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:14:07.251 22:55:34 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:14:07.251 22:55:34 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:14:07.251 22:55:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:07.251 22:55:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.251 22:55:34 -- common/autotest_common.sh@10 -- # set +x 00:14:07.251 ************************************ 00:14:07.251 START TEST blockdev_nvme 00:14:07.251 ************************************ 00:14:07.251 22:55:34 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:14:07.251 * Looking for test storage... 00:14:07.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:07.251 22:55:34 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:07.251 22:55:34 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:14:07.251 22:55:34 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:07.251 22:55:34 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:14:07.251 22:55:34 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:07.510 22:55:34 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:14:07.510 22:55:34 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:07.510 22:55:34 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:07.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.510 --rc genhtml_branch_coverage=1 00:14:07.511 --rc genhtml_function_coverage=1 00:14:07.511 --rc genhtml_legend=1 00:14:07.511 --rc geninfo_all_blocks=1 00:14:07.511 --rc geninfo_unexecuted_blocks=1 00:14:07.511 00:14:07.511 ' 00:14:07.511 22:55:34 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:07.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.511 --rc genhtml_branch_coverage=1 00:14:07.511 --rc genhtml_function_coverage=1 00:14:07.511 --rc genhtml_legend=1 00:14:07.511 --rc geninfo_all_blocks=1 00:14:07.511 --rc geninfo_unexecuted_blocks=1 00:14:07.511 00:14:07.511 ' 00:14:07.511 22:55:34 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:07.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.511 --rc genhtml_branch_coverage=1 00:14:07.511 --rc genhtml_function_coverage=1 00:14:07.511 --rc genhtml_legend=1 00:14:07.511 --rc geninfo_all_blocks=1 00:14:07.511 --rc geninfo_unexecuted_blocks=1 00:14:07.511 00:14:07.511 ' 00:14:07.511 22:55:34 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:07.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:07.511 --rc genhtml_branch_coverage=1 00:14:07.511 --rc genhtml_function_coverage=1 00:14:07.511 --rc genhtml_legend=1 00:14:07.511 --rc geninfo_all_blocks=1 00:14:07.511 --rc geninfo_unexecuted_blocks=1 00:14:07.511 00:14:07.511 ' 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:07.511 22:55:34 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61230 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:07.511 22:55:34 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61230 00:14:07.511 22:55:34 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61230 ']' 00:14:07.511 22:55:34 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.511 22:55:34 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.511 22:55:34 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.511 22:55:34 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.511 22:55:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.511 [2024-12-09 22:55:34.741752] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
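The blockdev harness next loads a bdev configuration generated by scripts/gen_nvme.sh, attaching the four QEMU NVMe controllers at 0000:00:10.0 through 0000:00:13.0 in a single load_subsystem_config call (shown just below). A sketch of the equivalent single-controller attach over RPC; the -b/-t/-a short options are assumed here:

    # Attach one controller by PCI address (sketch; the harness batches all
    # four via load_subsystem_config instead):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme0 -t PCIe -a 0000:00:10.0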
00:14:07.511 [2024-12-09 22:55:34.741889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61230 ] 00:14:07.770 [2024-12-09 22:55:34.923635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.770 [2024-12-09 22:55:35.068262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.705 22:55:35 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.705 22:55:35 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:14:08.705 22:55:35 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:14:08.705 22:55:36 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:14:08.705 22:55:36 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:14:08.705 22:55:36 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:14:08.705 22:55:36 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:08.964 22:55:36 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:14:08.964 22:55:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.964 22:55:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.223 22:55:36 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.223 22:55:36 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:14:09.223 22:55:36 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.223 22:55:36 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.223 22:55:36 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.223 22:55:36 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:14:09.223 22:55:36 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.223 22:55:36 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:14:09.223 22:55:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.482 22:55:36 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.482 22:55:36 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:14:09.482 22:55:36 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:14:09.482 22:55:36 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "39017b6c-b9b6-4697-8534-aafdf48b11ed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "39017b6c-b9b6-4697-8534-aafdf48b11ed",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "1c47a0bf-9a8f-4cd4-a41e-0aabc9d1edd0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1c47a0bf-9a8f-4cd4-a41e-0aabc9d1edd0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2b6ecae4-9234-45a5-ab74-003ec90ab431"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2b6ecae4-9234-45a5-ab74-003ec90ab431",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "7ca8a67e-0496-4c0f-a66d-b57ed31d0066"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7ca8a67e-0496-4c0f-a66d-b57ed31d0066",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d1806c71-7695-4f0b-873a-8dfcb8c6e735"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "d1806c71-7695-4f0b-873a-8dfcb8c6e735",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "b48feb42-c235-44f5-839d-be2c504d2246"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b48feb42-c235-44f5-839d-be2c504d2246",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:14:09.483 22:55:36 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:14:09.483 22:55:36 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:14:09.483 22:55:36 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:14:09.483 22:55:36 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61230 00:14:09.483 22:55:36 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61230 ']' 00:14:09.483 22:55:36 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61230 00:14:09.483 22:55:36 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:14:09.483 22:55:36 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.483 22:55:36 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61230 00:14:09.483 killing process with pid 61230 00:14:09.483 22:55:36 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.483 22:55:36 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.483 22:55:36 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61230' 00:14:09.483 22:55:36 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61230 00:14:09.483 22:55:36 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61230 00:14:12.015 22:55:39 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:12.015 22:55:39 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:14:12.015 22:55:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:12.015 22:55:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.015 22:55:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.015 ************************************ 00:14:12.015 START TEST bdev_hello_world 00:14:12.015 ************************************ 00:14:12.015 22:55:39 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:14:12.015 [2024-12-09 22:55:39.289550] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:14:12.015 [2024-12-09 22:55:39.289683] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61326 ] 00:14:12.274 [2024-12-09 22:55:39.471013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.274 [2024-12-09 22:55:39.601881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.209 [2024-12-09 22:55:40.313324] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:13.209 [2024-12-09 22:55:40.313399] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:14:13.209 [2024-12-09 22:55:40.313430] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:13.209 [2024-12-09 22:55:40.316634] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:13.209 [2024-12-09 22:55:40.317369] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:13.209 [2024-12-09 22:55:40.317406] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:13.209 [2024-12-09 22:55:40.317653] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
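At this point hello_bdev has opened Nvme0n1, completed a write, and read the string back, which is the whole point of the example. The same binary can be aimed at any other namespace attached earlier in this run; a hedged variant reusing this run's JSON config, with only the -b argument changed:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme1n1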
00:14:13.209 00:14:13.209 [2024-12-09 22:55:40.317681] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:14.586 00:14:14.586 real 0m2.328s 00:14:14.586 user 0m1.920s 00:14:14.586 sys 0m0.298s 00:14:14.586 ************************************ 00:14:14.586 END TEST bdev_hello_world 00:14:14.586 ************************************ 00:14:14.586 22:55:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.586 22:55:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:14.586 22:55:41 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:14:14.586 22:55:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:14.586 22:55:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.586 22:55:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.586 ************************************ 00:14:14.586 START TEST bdev_bounds 00:14:14.586 ************************************ 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61373 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61373' 00:14:14.586 Process bdevio pid: 61373 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61373 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61373 ']' 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.586 22:55:41 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:14.586 [2024-12-09 22:55:41.709553] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
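bdevio is launched here with -w, so after init it idles until a JSON-RPC call tells it to run; the tests.py perform_tests call a little further down is that trigger. Condensed from the command lines visible in this log, the two-step flow is roughly:

    # step 1: bdevio waits for an RPC trigger instead of running immediately
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # step 2: fire the whole CUnit suite against the six Nvme* bdevs
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests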
00:14:14.586 [2024-12-09 22:55:41.709686] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61373 ] 00:14:14.586 [2024-12-09 22:55:41.896535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:14.845 [2024-12-09 22:55:42.058499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.845 [2024-12-09 22:55:42.058674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.845 [2024-12-09 22:55:42.058698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.470 22:55:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.470 22:55:42 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:14:15.470 22:55:42 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:15.729 I/O targets: 00:14:15.729 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:15.729 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:15.729 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:15.729 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:15.729 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:15.729 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:15.729 00:14:15.729 00:14:15.729 CUnit - A unit testing framework for C - Version 2.1-3 00:14:15.729 http://cunit.sourceforge.net/ 00:14:15.729 00:14:15.729 00:14:15.729 Suite: bdevio tests on: Nvme3n1 00:14:15.729 Test: blockdev write read block ...passed 00:14:15.729 Test: blockdev write zeroes read block ...passed 00:14:15.729 Test: blockdev write zeroes read no split ...passed 00:14:15.729 Test: blockdev write zeroes read split ...passed 00:14:15.729 Test: blockdev write zeroes read split partial ...passed 00:14:15.729 Test: blockdev reset ...[2024-12-09 22:55:42.968092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:14:15.729 passed 00:14:15.729 Test: blockdev write read 8 blocks ...[2024-12-09 22:55:42.972526] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:14:15.729 passed 00:14:15.729 Test: blockdev write read size > 128k ...passed 00:14:15.729 Test: blockdev write read invalid size ...passed 00:14:15.729 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:15.729 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:15.729 Test: blockdev write read max offset ...passed 00:14:15.729 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:15.729 Test: blockdev writev readv 8 blocks ...passed 00:14:15.729 Test: blockdev writev readv 30 x 1block ...passed 00:14:15.729 Test: blockdev writev readv block ...passed 00:14:15.729 Test: blockdev writev readv size > 128k ...passed 00:14:15.729 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:15.729 Test: blockdev comparev and writev ...[2024-12-09 22:55:42.981748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2be60a000 len:0x1000 00:14:15.729 [2024-12-09 22:55:42.981803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:15.729 passed 00:14:15.729 Test: blockdev nvme passthru rw ...passed 00:14:15.729 Test: blockdev nvme passthru vendor specific ...[2024-12-09 22:55:42.982885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:15.729 passed 00:14:15.729 Test: blockdev nvme admin passthru ...[2024-12-09 22:55:42.983035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:15.729 passed 00:14:15.729 Test: blockdev copy ...passed 00:14:15.729 Suite: bdevio tests on: Nvme2n3 00:14:15.729 Test: blockdev write read block ...passed 00:14:15.729 Test: blockdev write zeroes read block ...passed 00:14:15.729 Test: blockdev write zeroes read no split ...passed 00:14:15.729 Test: blockdev write zeroes read split ...passed 00:14:15.989 Test: blockdev write zeroes read split partial ...passed 00:14:15.989 Test: blockdev reset ...[2024-12-09 22:55:43.063599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:14:15.989 passed 00:14:15.989 Test: blockdev write read 8 blocks ...[2024-12-09 22:55:43.067691] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:14:15.989 passed 00:14:15.989 Test: blockdev write read size > 128k ...passed 00:14:15.989 Test: blockdev write read invalid size ...passed 00:14:15.989 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:15.989 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:15.989 Test: blockdev write read max offset ...passed 00:14:15.989 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:15.989 Test: blockdev writev readv 8 blocks ...passed 00:14:15.989 Test: blockdev writev readv 30 x 1block ...passed 00:14:15.989 Test: blockdev writev readv block ...passed 00:14:15.989 Test: blockdev writev readv size > 128k ...passed 00:14:15.989 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:15.989 Test: blockdev comparev and writev ...[2024-12-09 22:55:43.076283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a1006000 len:0x1000 00:14:15.989 [2024-12-09 22:55:43.076333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:15.989 passed 00:14:15.989 Test: blockdev nvme passthru rw ...passed 00:14:15.989 Test: blockdev nvme passthru vendor specific ...passed 00:14:15.989 Test: blockdev nvme admin passthru ...[2024-12-09 22:55:43.077240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:15.989 [2024-12-09 22:55:43.077281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:15.989 passed 00:14:15.989 Test: blockdev copy ...passed 00:14:15.989 Suite: bdevio tests on: Nvme2n2 00:14:15.989 Test: blockdev write read block ...passed 00:14:15.989 Test: blockdev write zeroes read block ...passed 00:14:15.989 Test: blockdev write zeroes read no split ...passed 00:14:15.989 Test: blockdev write zeroes read split ...passed 00:14:15.989 Test: blockdev write zeroes read split partial ...passed 00:14:15.989 Test: blockdev reset ...[2024-12-09 22:55:43.152808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:14:15.989 [2024-12-09 22:55:43.157049] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:14:15.989 passed 00:14:15.989 Test: blockdev write read 8 blocks ...
00:14:15.989 passed 00:14:15.989 Test: blockdev write read size > 128k ...passed 00:14:15.989 Test: blockdev write read invalid size ...passed 00:14:15.989 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:15.989 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:15.989 Test: blockdev write read max offset ...passed 00:14:15.989 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:15.989 Test: blockdev writev readv 8 blocks ...passed 00:14:15.989 Test: blockdev writev readv 30 x 1block ...passed 00:14:15.989 Test: blockdev writev readv block ...passed 00:14:15.989 Test: blockdev writev readv size > 128k ...passed 00:14:15.989 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:15.989 Test: blockdev comparev and writev ...[2024-12-09 22:55:43.166079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce63c000 len:0x1000 00:14:15.989 [2024-12-09 22:55:43.166231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:15.989 passed 00:14:15.989 Test: blockdev nvme passthru rw ...passed 00:14:15.989 Test: blockdev nvme passthru vendor specific ...passed 00:14:15.989 Test: blockdev nvme admin passthru ...[2024-12-09 22:55:43.167598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:15.989 [2024-12-09 22:55:43.167637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:15.989 passed 00:14:15.989 Test: blockdev copy ...passed 00:14:15.989 Suite: bdevio tests on: Nvme2n1 00:14:15.989 Test: blockdev write read block ...passed 00:14:15.989 Test: blockdev write zeroes read block ...passed 00:14:15.989 Test: blockdev write zeroes read no split ...passed 00:14:15.989 Test: blockdev write zeroes read split ...passed 00:14:15.989 Test: blockdev write zeroes read split partial ...passed 00:14:15.989 Test: blockdev reset ...[2024-12-09 22:55:43.245267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:14:15.989 [2024-12-09 22:55:43.249430] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:14:15.989 passed 00:14:15.989 Test: blockdev write read 8 blocks ...
00:14:15.989 passed 00:14:15.989 Test: blockdev write read size > 128k ...passed 00:14:15.989 Test: blockdev write read invalid size ...passed 00:14:15.989 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:15.989 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:15.989 Test: blockdev write read max offset ...passed 00:14:15.989 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:15.989 Test: blockdev writev readv 8 blocks ...passed 00:14:15.989 Test: blockdev writev readv 30 x 1block ...passed 00:14:15.989 Test: blockdev writev readv block ...passed 00:14:15.989 Test: blockdev writev readv size > 128k ...passed 00:14:15.989 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:15.989 Test: blockdev comparev and writev ...[2024-12-09 22:55:43.259370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce638000 len:0x1000 00:14:15.989 [2024-12-09 22:55:43.259576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:15.989 passed 00:14:15.989 Test: blockdev nvme passthru rw ...passed 00:14:15.989 Test: blockdev nvme passthru vendor specific ...[2024-12-09 22:55:43.260611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:15.989 [2024-12-09 22:55:43.260758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:15.989 passed 00:14:15.989 Test: blockdev nvme admin passthru ...passed 00:14:15.989 Test: blockdev copy ...passed 00:14:15.989 Suite: bdevio tests on: Nvme1n1 00:14:15.989 Test: blockdev write read block ...passed 00:14:15.989 Test: blockdev write zeroes read block ...passed 00:14:15.989 Test: blockdev write zeroes read no split ...passed 00:14:15.989 Test: blockdev write zeroes read split ...passed 00:14:16.248 Test: blockdev write zeroes read split partial ...passed 00:14:16.248 Test: blockdev reset ...[2024-12-09 22:55:43.342120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:14:16.248 passed 00:14:16.248 Test: blockdev write read 8 blocks ...[2024-12-09 22:55:43.345926] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:14:16.248 passed 00:14:16.248 Test: blockdev write read size > 128k ...passed 00:14:16.248 Test: blockdev write read invalid size ...passed 00:14:16.248 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:16.248 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:16.248 Test: blockdev write read max offset ...passed 00:14:16.248 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:16.248 Test: blockdev writev readv 8 blocks ...passed 00:14:16.248 Test: blockdev writev readv 30 x 1block ...passed 00:14:16.248 Test: blockdev writev readv block ...passed 00:14:16.248 Test: blockdev writev readv size > 128k ...passed 00:14:16.248 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:16.248 Test: blockdev comparev and writev ...[2024-12-09 22:55:43.353945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce634000 len:0x1000 00:14:16.248 [2024-12-09 22:55:43.354094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:14:16.248 passed 00:14:16.248 Test: blockdev nvme passthru rw ...passed 00:14:16.248 Test: blockdev nvme passthru vendor specific ...passed 00:14:16.248 Test: blockdev nvme admin passthru ...[2024-12-09 22:55:43.354949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:14:16.248 [2024-12-09 22:55:43.354987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:14:16.248 passed 00:14:16.248 Test: blockdev copy ...passed 00:14:16.248 Suite: bdevio tests on: Nvme0n1 00:14:16.248 Test: blockdev write read block ...passed 00:14:16.248 Test: blockdev write zeroes read block ...passed 00:14:16.248 Test: blockdev write zeroes read no split ...passed 00:14:16.248 Test: blockdev write zeroes read split ...passed 00:14:16.248 Test: blockdev write zeroes read split partial ...passed 00:14:16.248 Test: blockdev reset ...[2024-12-09 22:55:43.433637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:14:16.248 [2024-12-09 22:55:43.437566] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:14:16.248 passed 00:14:16.248 Test: blockdev write read 8 blocks ...passed 00:14:16.248 Test: blockdev write read size > 128k ...passed 00:14:16.248 Test: blockdev write read invalid size ...passed 00:14:16.248 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:16.248 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:16.248 Test: blockdev write read max offset ...passed 00:14:16.248 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:16.248 Test: blockdev writev readv 8 blocks ...passed 00:14:16.248 Test: blockdev writev readv 30 x 1block ...passed 00:14:16.248 Test: blockdev writev readv block ...passed 00:14:16.249 Test: blockdev writev readv size > 128k ...passed 00:14:16.249 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:16.249 Test: blockdev comparev and writev ...passed 00:14:16.249 Test: blockdev nvme passthru rw ...[2024-12-09 22:55:43.446318] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:14:16.249 separate metadata which is not supported yet. 
00:14:16.249 passed 00:14:16.249 Test: blockdev nvme passthru vendor specific ...[2024-12-09 22:55:43.447186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:14:16.249 [2024-12-09 22:55:43.447352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:14:16.249 passed 00:14:16.249 Test: blockdev nvme admin passthru ...passed 00:14:16.249 Test: blockdev copy ...passed 00:14:16.249 00:14:16.249 Run Summary: Type Total Ran Passed Failed Inactive 00:14:16.249 suites 6 6 n/a 0 0 00:14:16.249 tests 138 138 138 0 0 00:14:16.249 asserts 893 893 893 0 n/a 00:14:16.249 00:14:16.249 Elapsed time = 1.538 seconds 00:14:16.249 0 00:14:16.249 22:55:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61373 00:14:16.249 22:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61373 ']' 00:14:16.249 22:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61373 00:14:16.249 22:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:14:16.249 22:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.249 22:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61373 00:14:16.249 killing process with pid 61373 00:14:16.249 22:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.249 22:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.249 22:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61373' 00:14:16.249 22:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61373 00:14:16.249 22:55:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61373 00:14:17.625 22:55:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:17.625 00:14:17.625 real 0m3.010s 00:14:17.625 user 0m7.562s 00:14:17.625 sys 0m0.464s 00:14:17.625 22:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.625 ************************************ 00:14:17.625 END TEST bdev_bounds 00:14:17.625 ************************************ 00:14:17.625 22:55:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:17.625 22:55:44 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:14:17.625 22:55:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:17.625 22:55:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.625 22:55:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.625 ************************************ 00:14:17.625 START TEST bdev_nbd 00:14:17.625 ************************************ 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61437 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61437 /var/tmp/spdk-nbd.sock 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61437 ']' 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.625 22:55:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:17.625 [2024-12-09 22:55:44.804499] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
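Everything in bdev_nbd below runs through the dedicated socket /var/tmp/spdk-nbd.sock: each bdev is exported as a /dev/nbdN device, probed with a single direct-I/O dd, and later detached. A sketch of that round trip for one device, using the RPC names that appear in the trace (the rpc shell function is an illustrative alias, not part of the harness):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc nbd_start_disk Nvme0n1 /dev/nbd0                        # export the bdev over NBD
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # one raw read proves it is live
    rpc nbd_get_disks                                           # JSON list of nbd_device/bdev_name pairs
    rpc nbd_stop_disk /dev/nbd0                                 # tear the mapping down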
00:14:17.625 [2024-12-09 22:55:44.804639] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.884 [2024-12-09 22:55:44.988296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.884 [2024-12-09 22:55:45.130979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.819 22:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.819 22:55:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:14:18.819 22:55:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:14:18.819 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:18.819 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:18.819 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:18.819 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:14:18.820 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:18.820 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:18.820 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:18.820 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:18.820 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:18.820 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:18.820 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:18.820 22:55:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.077 1+0 records in 
00:14:19.077 1+0 records out 00:14:19.077 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486453 s, 8.4 MB/s 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:19.077 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.335 1+0 records in 00:14:19.335 1+0 records out 00:14:19.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454501 s, 9.0 MB/s 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:19.335 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:19.336 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:19.336 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.595 1+0 records in 00:14:19.595 1+0 records out 00:14:19.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044779 s, 9.1 MB/s 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:19.595 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:14:19.854 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:19.854 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:19.854 22:55:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.854 1+0 records in 00:14:19.854 1+0 records out 00:14:19.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565212 s, 7.2 MB/s 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.854 22:55:47 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:19.854 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.112 1+0 records in 00:14:20.112 1+0 records out 00:14:20.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632583 s, 6.5 MB/s 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:20.112 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.371 1+0 records in 00:14:20.371 1+0 records out 00:14:20.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455269 s, 9.0 MB/s 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:20.371 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:20.630 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd0", 00:14:20.630 "bdev_name": "Nvme0n1" 00:14:20.630 }, 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd1", 00:14:20.630 "bdev_name": "Nvme1n1" 00:14:20.630 }, 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd2", 00:14:20.630 "bdev_name": "Nvme2n1" 00:14:20.630 }, 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd3", 00:14:20.630 "bdev_name": "Nvme2n2" 00:14:20.630 }, 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd4", 00:14:20.630 "bdev_name": "Nvme2n3" 00:14:20.630 }, 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd5", 00:14:20.630 "bdev_name": "Nvme3n1" 00:14:20.630 } 00:14:20.630 ]' 00:14:20.630 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:20.630 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:20.630 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd0", 00:14:20.630 "bdev_name": "Nvme0n1" 00:14:20.630 }, 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd1", 00:14:20.630 "bdev_name": "Nvme1n1" 00:14:20.630 }, 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd2", 00:14:20.630 "bdev_name": "Nvme2n1" 00:14:20.630 }, 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd3", 00:14:20.630 "bdev_name": "Nvme2n2" 00:14:20.630 }, 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd4", 00:14:20.630 "bdev_name": "Nvme2n3" 00:14:20.630 }, 00:14:20.630 { 00:14:20.630 "nbd_device": "/dev/nbd5", 00:14:20.630 "bdev_name": "Nvme3n1" 00:14:20.630 } 00:14:20.630 ]' 00:14:20.630 22:55:47 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:20.630 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:20.630 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:20.630 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:20.630 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:20.630 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.630 22:55:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:20.890 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:20.890 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:20.890 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:20.890 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:20.890 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:20.890 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:20.890 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:20.890 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:20.890 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.890 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:21.148 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:21.148 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:21.148 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:21.148 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.148 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.148 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:21.148 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:21.148 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.148 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.148 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:21.407 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:21.407 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:21.407 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:21.407 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.407 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.407 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:21.407 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:21.407 22:55:48 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:21.407 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.407 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:21.665 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:21.665 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:21.665 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:21.665 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.665 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.665 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:21.665 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:21.665 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.665 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.665 22:55:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:21.923 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:21.923 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:21.923 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:21.923 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.923 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.923 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:21.923 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:21.923 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.923 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.923 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:22.180 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:22.180 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:22.180 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:22.180 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.180 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.180 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:22.180 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:22.180 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.180 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:22.180 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:22.180 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:22.440 22:55:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:22.440 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:14:22.699 /dev/nbd0 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:22.699 
22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.699 1+0 records in 00:14:22.699 1+0 records out 00:14:22.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602271 s, 6.8 MB/s 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:22.699 22:55:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:14:22.957 /dev/nbd1 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.957 1+0 records in 00:14:22.957 1+0 records out 00:14:22.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621072 s, 6.6 MB/s 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:22.957 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:14:23.216 /dev/nbd10 00:14:23.216 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:23.216 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:23.216 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:14:23.216 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:23.216 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.216 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.216 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:14:23.216 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:23.216 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.216 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.216 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.216 1+0 records in 00:14:23.216 1+0 records out 00:14:23.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059705 s, 6.9 MB/s 00:14:23.474 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.474 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:23.474 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.474 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.474 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:23.474 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.474 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:23.474 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:14:23.732 /dev/nbd11 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.732 22:55:50 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.732 1+0 records in 00:14:23.732 1+0 records out 00:14:23.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710031 s, 5.8 MB/s 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:23.732 22:55:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:14:23.991 /dev/nbd12 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.991 1+0 records in 00:14:23.991 1+0 records out 00:14:23.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547581 s, 7.5 MB/s 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:23.991 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:14:24.249 /dev/nbd13 
00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.249 1+0 records in 00:14:24.249 1+0 records out 00:14:24.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536622 s, 7.6 MB/s 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:24.249 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd0", 00:14:24.508 "bdev_name": "Nvme0n1" 00:14:24.508 }, 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd1", 00:14:24.508 "bdev_name": "Nvme1n1" 00:14:24.508 }, 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd10", 00:14:24.508 "bdev_name": "Nvme2n1" 00:14:24.508 }, 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd11", 00:14:24.508 "bdev_name": "Nvme2n2" 00:14:24.508 }, 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd12", 00:14:24.508 "bdev_name": "Nvme2n3" 00:14:24.508 }, 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd13", 00:14:24.508 "bdev_name": "Nvme3n1" 00:14:24.508 } 00:14:24.508 ]' 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd0", 00:14:24.508 "bdev_name": "Nvme0n1" 00:14:24.508 }, 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd1", 00:14:24.508 "bdev_name": "Nvme1n1" 00:14:24.508 
}, 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd10", 00:14:24.508 "bdev_name": "Nvme2n1" 00:14:24.508 }, 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd11", 00:14:24.508 "bdev_name": "Nvme2n2" 00:14:24.508 }, 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd12", 00:14:24.508 "bdev_name": "Nvme2n3" 00:14:24.508 }, 00:14:24.508 { 00:14:24.508 "nbd_device": "/dev/nbd13", 00:14:24.508 "bdev_name": "Nvme3n1" 00:14:24.508 } 00:14:24.508 ]' 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:24.508 /dev/nbd1 00:14:24.508 /dev/nbd10 00:14:24.508 /dev/nbd11 00:14:24.508 /dev/nbd12 00:14:24.508 /dev/nbd13' 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:24.508 /dev/nbd1 00:14:24.508 /dev/nbd10 00:14:24.508 /dev/nbd11 00:14:24.508 /dev/nbd12 00:14:24.508 /dev/nbd13' 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:24.508 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:24.765 256+0 records in 00:14:24.765 256+0 records out 00:14:24.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118266 s, 88.7 MB/s 00:14:24.766 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:24.766 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:24.766 256+0 records in 00:14:24.766 256+0 records out 00:14:24.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113425 s, 9.2 MB/s 00:14:24.766 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:24.766 22:55:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:24.766 256+0 records in 00:14:24.766 256+0 records out 00:14:24.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120191 s, 8.7 MB/s 00:14:24.766 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:24.766 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:25.023 256+0 records in 00:14:25.023 256+0 records out 00:14:25.023 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.130121 s, 8.1 MB/s 00:14:25.023 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:25.023 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:25.281 256+0 records in 00:14:25.281 256+0 records out 00:14:25.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122828 s, 8.5 MB/s 00:14:25.281 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:25.281 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:25.281 256+0 records in 00:14:25.281 256+0 records out 00:14:25.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123094 s, 8.5 MB/s 00:14:25.281 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:25.281 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:25.540 256+0 records in 00:14:25.540 256+0 records out 00:14:25.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12803 s, 8.2 MB/s 00:14:25.540 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:25.540 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:25.540 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:25.540 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:25.540 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:25.540 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:25.540 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:25.540 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.541 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:25.799 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.799 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.799 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.799 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.799 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.799 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.799 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:25.799 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.799 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.799 22:55:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.058 22:55:53 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.058 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:26.317 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:26.317 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:26.317 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:26.317 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.317 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.317 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:26.317 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.317 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.317 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.317 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:26.576 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:26.576 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:26.576 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:26.576 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.576 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.576 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:26.576 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.576 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.576 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:26.576 22:55:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:26.834 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:26.834 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:26.834 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:26.834 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.834 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.834 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:26.834 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.834 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.834 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:26.834 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:14:26.834 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:27.093 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:27.351 malloc_lvol_verify 00:14:27.351 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:27.610 c89b788c-79f7-425a-9510-3a2244ec963e 00:14:27.610 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:27.869 54c53624-1116-4b83-9ec3-06d0407d9a33 00:14:27.869 22:55:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:27.869 /dev/nbd0 00:14:27.869 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:27.869 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:27.869 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:27.869 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:27.869 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:14:27.869 mke2fs 1.47.0 (5-Feb-2023) 00:14:27.869 Discarding device blocks: 0/4096 done 00:14:27.869 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:27.869 00:14:27.869 Allocating group tables: 0/1 done 00:14:27.869 Writing inode tables: 0/1 done 00:14:27.869 Creating journal (1024 blocks): done 00:14:28.127 Writing superblocks and filesystem accounting information: 0/1 done 00:14:28.127 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:28.127 22:55:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61437 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61437 ']' 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61437 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.127 22:55:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61437 00:14:28.403 killing process with pid 61437 00:14:28.403 22:55:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.403 22:55:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.403 22:55:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61437' 00:14:28.403 22:55:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61437 00:14:28.403 22:55:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61437 00:14:29.783 ************************************ 00:14:29.783 END TEST bdev_nbd 00:14:29.783 ************************************ 00:14:29.783 22:55:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:29.783 00:14:29.783 real 0m12.073s 00:14:29.783 user 0m15.826s 00:14:29.783 sys 0m4.866s 00:14:29.783 22:55:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.783 22:55:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:29.783 22:55:56 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:14:29.783 22:55:56 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:14:29.783 skipping fio tests on NVMe due to multi-ns failures. 00:14:29.783 22:55:56 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
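Note: the killprocess teardown traced a few entries above (common/autotest_common.sh lines @954 through @978) amounts to the following hedged bash sketch, reconstructed purely from the trace; the real helper may differ in signal choice, error handling, and how it treats sudo-wrapped processes.

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                 # @954: a pid must be supplied
        kill -0 "$pid" 2>/dev/null || return 0    # @958: already exited, nothing to do
        if [ "$(uname)" = Linux ]; then           # @959
            # @960: resolve the command name; "reactor_0" for the SPDK app in this log
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then      # @964: never signal a sudo wrapper directly
            echo "killing process with pid $pid"  # @972
            kill "$pid"                           # @973: default SIGTERM
            wait "$pid" || true                   # @978: reap the child so the test exits cleanly
        fi
    }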
00:14:29.783 22:55:56 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:29.783 22:55:56 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:29.783 22:55:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:14:29.783 22:55:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.783 22:55:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.783 ************************************ 00:14:29.783 START TEST bdev_verify 00:14:29.783 ************************************ 00:14:29.783 22:55:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:29.783 [2024-12-09 22:55:56.937863] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:14:29.783 [2024-12-09 22:55:56.938005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61834 ] 00:14:30.043 [2024-12-09 22:55:57.123797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:30.043 [2024-12-09 22:55:57.254025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.043 [2024-12-09 22:55:57.254074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.984 Running I/O for 5 seconds... 00:14:32.857 21120.00 IOPS, 82.50 MiB/s [2024-12-09T22:56:01.579Z] 20928.00 IOPS, 81.75 MiB/s [2024-12-09T22:56:02.516Z] 21632.00 IOPS, 84.50 MiB/s [2024-12-09T22:56:03.451Z] 21248.00 IOPS, 83.00 MiB/s [2024-12-09T22:56:03.451Z] 20403.20 IOPS, 79.70 MiB/s 00:14:36.115 Latency(us) 00:14:36.115 [2024-12-09T22:56:03.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.115 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0x0 length 0xbd0bd 00:14:36.115 Nvme0n1 : 5.06 1619.50 6.33 0.00 0.00 78690.30 17476.27 82538.51 00:14:36.115 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:36.115 Nvme0n1 : 5.08 1738.82 6.79 0.00 0.00 72805.70 14739.02 65693.92 00:14:36.115 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0x0 length 0xa0000 00:14:36.115 Nvme1n1 : 5.08 1626.06 6.35 0.00 0.00 78366.24 6737.84 82538.51 00:14:36.115 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0xa0000 length 0xa0000 00:14:36.115 Nvme1n1 : 5.08 1738.42 6.79 0.00 0.00 72764.52 12264.97 66536.15 00:14:36.115 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0x0 length 0x80000 00:14:36.115 Nvme2n1 : 5.08 1624.85 6.35 0.00 0.00 78287.61 10159.40 82538.51 00:14:36.115 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0x80000 length 0x80000 00:14:36.115 Nvme2n1 : 5.07 1741.21 6.80 0.00 0.00 73346.08 15791.81 70747.30 00:14:36.115 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0x0 length 0x80000 00:14:36.115 Nvme2n2 : 5.08 1624.41 6.35 0.00 0.00 78221.57 9580.36 83380.74 00:14:36.115 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0x80000 length 0x80000 00:14:36.115 Nvme2n2 : 5.07 1740.69 6.80 0.00 0.00 73076.22 15160.13 62325.00 00:14:36.115 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0x0 length 0x80000 00:14:36.115 Nvme2n3 : 5.08 1624.03 6.34 0.00 0.00 78124.34 9264.53 82538.51 00:14:36.115 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0x80000 length 0x80000 00:14:36.115 Nvme2n3 : 5.08 1740.19 6.80 0.00 0.00 72942.34 15370.69 64851.69 00:14:36.115 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0x0 length 0x20000 00:14:36.115 Nvme3n1 : 5.08 1623.63 6.34 0.00 0.00 78027.54 9001.33 81275.17 00:14:36.115 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:36.115 Verification LBA range: start 0x20000 length 0x20000 00:14:36.115 Nvme3n1 : 5.08 1739.61 6.80 0.00 0.00 72851.30 15160.13 64009.46 00:14:36.115 [2024-12-09T22:56:03.451Z] =================================================================================================================== 00:14:36.115 [2024-12-09T22:56:03.451Z] Total : 20181.42 78.83 0.00 0.00 75533.65 6737.84 83380.74 00:14:37.492 00:14:37.492 real 0m7.836s 00:14:37.492 user 0m14.419s 00:14:37.492 sys 0m0.358s 00:14:37.492 22:56:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.492 22:56:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:37.492 ************************************ 00:14:37.492 END TEST bdev_verify 00:14:37.492 ************************************ 00:14:37.492 22:56:04 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:37.492 22:56:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:14:37.492 22:56:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.492 22:56:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:37.492 ************************************ 00:14:37.492 START TEST bdev_verify_big_io 00:14:37.492 ************************************ 00:14:37.492 22:56:04 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:37.751 [2024-12-09 22:56:04.840161] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
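Note: bdev_verify above and bdev_verify_big_io below drive the same bdevperf example binary and differ mainly in -o (4096 vs 65536 bytes). A hedged reading of the flags, based on standard SPDK bdevperf usage; the -C semantics in particular are an assumption, not something this log confirms:

    # bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3
    #
    #   --json     attach the bdevs described in bdev.json before running jobs
    #   -q 128     queue depth per job
    #   -o 65536   I/O size in bytes (4096 in bdev_verify, 65536 in the big-io pass)
    #   -w verify  write data, then read it back and compare
    #   -t 5       run time in seconds
    #   -m 0x3     core mask: cores 0 and 1, matching the two reactors started in the log
    #   -C         assumption: lets every core submit I/O to every attached bdev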
00:14:37.751 [2024-12-09 22:56:04.840290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61938 ] 00:14:37.751 [2024-12-09 22:56:05.021008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:38.014 [2024-12-09 22:56:05.158238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.014 [2024-12-09 22:56:05.158270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.949 Running I/O for 5 seconds... 00:14:42.891 736.00 IOPS, 46.00 MiB/s [2024-12-09T22:56:11.625Z] 1743.50 IOPS, 108.97 MiB/s [2024-12-09T22:56:11.882Z] 2642.67 IOPS, 165.17 MiB/s [2024-12-09T22:56:12.141Z] 2736.75 IOPS, 171.05 MiB/s 00:14:44.805 Latency(us) 00:14:44.805 [2024-12-09T22:56:12.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.805 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.805 Verification LBA range: start 0x0 length 0xbd0b 00:14:44.805 Nvme0n1 : 5.55 149.95 9.37 0.00 0.00 829196.16 26951.35 923083.77 00:14:44.805 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.805 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:44.805 Nvme0n1 : 5.58 149.05 9.32 0.00 0.00 827735.17 32215.29 916345.93 00:14:44.805 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.805 Verification LBA range: start 0x0 length 0xa000 00:14:44.805 Nvme1n1 : 5.55 149.88 9.37 0.00 0.00 806495.69 73695.10 774851.34 00:14:44.805 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.805 Verification LBA range: start 0xa000 length 0xa000 00:14:44.805 Nvme1n1 : 5.73 152.62 9.54 0.00 0.00 791273.14 73273.99 774851.34 00:14:44.805 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.805 Verification LBA range: start 0x0 length 0x8000 00:14:44.805 Nvme2n1 : 5.66 154.53 9.66 0.00 0.00 759905.01 44217.06 737793.23 00:14:44.805 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.805 Verification LBA range: start 0x8000 length 0x8000 00:14:44.805 Nvme2n1 : 5.73 152.40 9.53 0.00 0.00 769997.44 73695.10 751268.91 00:14:44.805 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.805 Verification LBA range: start 0x0 length 0x8000 00:14:44.805 Nvme2n2 : 5.66 158.17 9.89 0.00 0.00 725960.19 64430.57 758006.75 00:14:44.805 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.805 Verification LBA range: start 0x8000 length 0x8000 00:14:44.805 Nvme2n2 : 5.74 156.17 9.76 0.00 0.00 736962.75 74116.22 778220.26 00:14:44.805 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.805 Verification LBA range: start 0x0 length 0x8000 00:14:44.805 Nvme2n3 : 5.82 171.89 10.74 0.00 0.00 652566.96 39795.35 784958.10 00:14:44.805 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.805 Verification LBA range: start 0x8000 length 0x8000 00:14:44.806 Nvme2n3 : 5.81 165.23 10.33 0.00 0.00 680864.65 30320.27 801802.69 00:14:44.806 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:44.806 Verification LBA range: start 0x0 length 0x2000 00:14:44.806 Nvme3n1 : 5.82 180.48 11.28 0.00 0.00 606015.87 2974.12 1246499.98 00:14:44.806 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:44.806 Verification LBA range: start 0x2000 length 0x2000 00:14:44.806 Nvme3n1 : 5.82 175.89 10.99 0.00 0.00 624023.20 3145.20 828754.04 00:14:44.806 [2024-12-09T22:56:12.142Z] =================================================================================================================== 00:14:44.806 [2024-12-09T22:56:12.142Z] Total : 1916.26 119.77 0.00 0.00 728302.58 2974.12 1246499.98 00:14:46.709 00:14:46.709 real 0m9.088s 00:14:46.709 user 0m16.890s 00:14:46.709 sys 0m0.388s 00:14:46.709 22:56:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.709 22:56:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.709 ************************************ 00:14:46.709 END TEST bdev_verify_big_io 00:14:46.709 ************************************ 00:14:46.709 22:56:13 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:46.709 22:56:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:46.709 22:56:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.709 22:56:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:46.709 ************************************ 00:14:46.709 START TEST bdev_write_zeroes 00:14:46.709 ************************************ 00:14:46.709 22:56:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:46.709 [2024-12-09 22:56:14.006473] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:14:46.709 [2024-12-09 22:56:14.006614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62058 ] 00:14:46.968 [2024-12-09 22:56:14.190438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.226 [2024-12-09 22:56:14.326326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.795 Running I/O for 1 seconds... 
00:14:49.169 67904.00 IOPS, 265.25 MiB/s 00:14:49.169 Latency(us) 00:14:49.169 [2024-12-09T22:56:16.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.169 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:49.169 Nvme0n1 : 1.02 11281.09 44.07 0.00 0.00 11316.17 9159.25 27583.02 00:14:49.169 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:49.169 Nvme1n1 : 1.02 11269.82 44.02 0.00 0.00 11314.29 9475.08 29688.60 00:14:49.169 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:49.169 Nvme2n1 : 1.02 11259.09 43.98 0.00 0.00 11279.35 9106.61 29056.93 00:14:49.169 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:49.169 Nvme2n2 : 1.03 11300.02 44.14 0.00 0.00 11183.84 5921.93 23582.43 00:14:49.169 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:49.169 Nvme2n3 : 1.03 11290.12 44.10 0.00 0.00 11151.22 5948.25 22634.92 00:14:49.169 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:49.169 Nvme3n1 : 1.03 11218.00 43.82 0.00 0.00 11186.38 6106.17 29267.48 00:14:49.169 [2024-12-09T22:56:16.505Z] =================================================================================================================== 00:14:49.169 [2024-12-09T22:56:16.505Z] Total : 67618.14 264.13 0.00 0.00 11238.41 5921.93 29688.60 00:14:50.123 00:14:50.123 real 0m3.448s 00:14:50.123 user 0m3.007s 00:14:50.123 sys 0m0.324s 00:14:50.123 22:56:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.123 22:56:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:50.123 ************************************ 00:14:50.123 END TEST bdev_write_zeroes 00:14:50.123 ************************************ 00:14:50.123 22:56:17 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:50.123 22:56:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:50.123 22:56:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.123 22:56:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.123 ************************************ 00:14:50.123 START TEST bdev_json_nonenclosed 00:14:50.123 ************************************ 00:14:50.123 22:56:17 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:50.382 [2024-12-09 22:56:17.527132] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:14:50.382 [2024-12-09 22:56:17.527262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62111 ] 00:14:50.382 [2024-12-09 22:56:17.707904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.641 [2024-12-09 22:56:17.842743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.641 [2024-12-09 22:56:17.842856] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:50.641 [2024-12-09 22:56:17.842879] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:50.641 [2024-12-09 22:56:17.842892] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:50.900 00:14:50.900 real 0m0.686s 00:14:50.900 user 0m0.428s 00:14:50.900 sys 0m0.153s 00:14:50.900 22:56:18 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.900 22:56:18 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:50.900 ************************************ 00:14:50.900 END TEST bdev_json_nonenclosed 00:14:50.900 ************************************ 00:14:50.900 22:56:18 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:50.900 22:56:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:14:50.900 22:56:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.900 22:56:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:50.900 ************************************ 00:14:50.900 START TEST bdev_json_nonarray 00:14:50.900 ************************************ 00:14:50.900 22:56:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:51.158 [2024-12-09 22:56:18.287753] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:14:51.158 [2024-12-09 22:56:18.287890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62142 ] 00:14:51.158 [2024-12-09 22:56:18.468766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.417 [2024-12-09 22:56:18.602948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.417 [2024-12-09 22:56:18.603065] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
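The two JSON negative tests here feed bdevperf deliberately malformed configs and expect the config load to fail: nonenclosed.json trips json_config.c's "not enclosed in {}" check, and nonarray.json trips the "'subsystems' should be an array" check just printed. The fixtures' exact contents are not echoed into this log; a hypothetical stand-in pair that would trigger the same two errors could look like this:

# hypothetical stand-ins for the repo's test/bdev fixtures (contents assumed, not taken from this log)
# top-level value is an array rather than an object => "not enclosed in {}"
cat > /tmp/nonenclosed.json <<'EOF'
[ { "subsystems": [] } ]
EOF
# top-level object is fine, but "subsystems" is not an array
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "bdev": {} } }
EOF

Passed via --json, either file should make the app stop with a non-zero rc, which is the failure path these tests assert (hence the spdk_app_stop'd on non-zero warning).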
00:14:51.417 [2024-12-09 22:56:18.603088] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:51.417 [2024-12-09 22:56:18.603101] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:51.675 00:14:51.675 real 0m0.690s 00:14:51.675 user 0m0.425s 00:14:51.675 sys 0m0.160s 00:14:51.675 22:56:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.676 22:56:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:51.676 ************************************ 00:14:51.676 END TEST bdev_json_nonarray 00:14:51.676 ************************************ 00:14:51.676 22:56:18 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:14:51.676 22:56:18 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:14:51.676 22:56:18 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:14:51.676 22:56:18 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:14:51.676 22:56:18 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:14:51.676 22:56:18 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:51.676 22:56:18 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:51.676 22:56:18 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:14:51.676 22:56:18 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:14:51.676 22:56:18 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:14:51.676 22:56:18 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:14:51.676 00:14:51.676 real 0m44.602s 00:14:51.676 user 1m5.427s 00:14:51.676 sys 0m8.282s 00:14:51.676 22:56:18 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.676 22:56:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.676 ************************************ 00:14:51.676 END TEST blockdev_nvme 00:14:51.676 ************************************ 00:14:51.934 22:56:19 -- spdk/autotest.sh@209 -- # uname -s 00:14:51.934 22:56:19 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:14:51.934 22:56:19 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:14:51.934 22:56:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:51.934 22:56:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.934 22:56:19 -- common/autotest_common.sh@10 -- # set +x 00:14:51.934 ************************************ 00:14:51.934 START TEST blockdev_nvme_gpt 00:14:51.934 ************************************ 00:14:51.934 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:14:51.934 * Looking for test storage... 
00:14:51.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:51.934 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:51.934 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:14:51.934 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:51.934 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:51.934 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.934 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.934 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.934 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.934 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.934 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.934 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.934 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.934 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.935 22:56:19 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:14:51.935 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.935 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:51.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.935 --rc genhtml_branch_coverage=1 00:14:51.935 --rc genhtml_function_coverage=1 00:14:51.935 --rc genhtml_legend=1 00:14:51.935 --rc geninfo_all_blocks=1 00:14:51.935 --rc geninfo_unexecuted_blocks=1 00:14:51.935 00:14:51.935 ' 00:14:51.935 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:51.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.935 --rc 
genhtml_branch_coverage=1 00:14:51.935 --rc genhtml_function_coverage=1 00:14:51.935 --rc genhtml_legend=1 00:14:51.935 --rc geninfo_all_blocks=1 00:14:51.935 --rc geninfo_unexecuted_blocks=1 00:14:51.935 00:14:51.935 ' 00:14:51.935 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:51.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.935 --rc genhtml_branch_coverage=1 00:14:51.935 --rc genhtml_function_coverage=1 00:14:51.935 --rc genhtml_legend=1 00:14:51.935 --rc geninfo_all_blocks=1 00:14:51.935 --rc geninfo_unexecuted_blocks=1 00:14:51.935 00:14:51.935 ' 00:14:51.935 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:51.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.935 --rc genhtml_branch_coverage=1 00:14:51.935 --rc genhtml_function_coverage=1 00:14:51.935 --rc genhtml_legend=1 00:14:51.935 --rc geninfo_all_blocks=1 00:14:51.935 --rc geninfo_unexecuted_blocks=1 00:14:51.935 00:14:51.935 ' 00:14:51.935 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:51.935 22:56:19 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:14:51.935 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:51.935 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62226 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:14:52.194 22:56:19 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62226 00:14:52.194 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62226 ']' 00:14:52.194 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.194 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.194 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.194 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.194 22:56:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:14:52.194 [2024-12-09 22:56:19.403322] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:14:52.194 [2024-12-09 22:56:19.403486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62226 ] 00:14:52.453 [2024-12-09 22:56:19.587468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.453 [2024-12-09 22:56:19.729703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.392 22:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.392 22:56:20 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:14:53.392 22:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:14:53.392 22:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:14:53.392 22:56:20 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:53.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:54.216 Waiting for block devices as requested 00:14:54.475 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:54.475 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:54.734 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:54.734 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:00.142 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:00.142 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:00.142 22:56:27 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:15:00.142 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:15:00.143 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:15:00.143 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:00.143 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:00.143 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:00.143 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:00.143 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:15:00.143 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:15:00.143 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:00.143 22:56:27 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:00.143 22:56:27 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:15:00.143 BYT; 00:15:00.143 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:15:00.143 BYT; 00:15:00.143 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:15:00.143 22:56:27 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:15:00.143 22:56:27 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:15:01.079 The operation has completed successfully. 00:15:01.079 22:56:28 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:15:02.015 The operation has completed successfully. 00:15:02.015 22:56:29 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:02.951 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:03.517 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:03.517 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:03.517 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:03.517 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:03.776 22:56:30 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:15:03.776 22:56:30 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.776 22:56:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:03.776 [] 00:15:03.776 22:56:30 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.776 22:56:30 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:15:03.776 22:56:30 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:15:03.776 22:56:30 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:15:03.776 22:56:30 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:03.776 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:15:03.776 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.776 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:04.035 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.035 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:04.035 22:56:31 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.035 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:04.035 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.035 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:15:04.035 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:04.035 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.035 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:04.035 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.035 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:04.035 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.035 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:04.295 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.295 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:04.295 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.295 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:04.295 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.295 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:04.295 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:04.295 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:04.295 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.295 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:04.295 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.295 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:04.295 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:04.296 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "bc950781-7ab7-47c0-b2b1-f8f2ae39dbb1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "bc950781-7ab7-47c0-b2b1-f8f2ae39dbb1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3de49a53-8b1d-460e-9b90-db8d643164ce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3de49a53-8b1d-460e-9b90-db8d643164ce",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8173ad67-0b65-4cdb-a53a-c2ce133631a4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8173ad67-0b65-4cdb-a53a-c2ce133631a4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e41604cd-4681-4a89-8b10-17f3dd3a1c7b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e41604cd-4681-4a89-8b10-17f3dd3a1c7b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "459a9f02-e42a-4285-bf16-13dc1bb5c794"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "459a9f02-e42a-4285-bf16-13dc1bb5c794",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:15:04.296 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:04.296 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:15:04.296 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:04.296 22:56:31 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62226 00:15:04.296 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62226 ']' 00:15:04.296 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62226 00:15:04.296 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:15:04.296 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.296 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62226 00:15:04.623 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:04.623 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:04.623 killing process with pid 62226 00:15:04.623 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62226' 00:15:04.623 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62226 00:15:04.623 22:56:31 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62226 00:15:07.159 22:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:07.159 22:56:34 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:15:07.159 22:56:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:07.159 22:56:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.159 22:56:34 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:07.159 ************************************ 00:15:07.159 START TEST bdev_hello_world 00:15:07.159 ************************************ 00:15:07.159 22:56:34 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:15:07.159 [2024-12-09 22:56:34.243198] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:15:07.159 [2024-12-09 22:56:34.243347] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62875 ] 00:15:07.159 [2024-12-09 22:56:34.408601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.418 [2024-12-09 22:56:34.541971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.985 [2024-12-09 22:56:35.253853] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:07.985 [2024-12-09 22:56:35.253910] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:15:07.985 [2024-12-09 22:56:35.253940] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:07.985 [2024-12-09 22:56:35.257085] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:07.985 [2024-12-09 22:56:35.257630] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:07.985 [2024-12-09 22:56:35.257787] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:07.985 [2024-12-09 22:56:35.258031] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:15:07.985 00:15:07.985 [2024-12-09 22:56:35.258056] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:09.363 00:15:09.363 real 0m2.339s 00:15:09.363 user 0m1.905s 00:15:09.363 sys 0m0.323s 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:09.363 ************************************ 00:15:09.363 END TEST bdev_hello_world 00:15:09.363 ************************************ 00:15:09.363 22:56:36 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:09.363 22:56:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:09.363 22:56:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.363 22:56:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:09.363 ************************************ 00:15:09.363 START TEST bdev_bounds 00:15:09.363 ************************************ 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62924 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:09.363 Process bdevio pid: 62924 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62924' 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62924 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62924 ']' 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:09.363 22:56:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:09.363 [2024-12-09 22:56:36.657982] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:15:09.363 [2024-12-09 22:56:36.658128] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62924 ] 00:15:09.622 [2024-12-09 22:56:36.844599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:09.902 [2024-12-09 22:56:36.978061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.902 [2024-12-09 22:56:36.978217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.902 [2024-12-09 22:56:36.978249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.471 22:56:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.471 22:56:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:10.471 22:56:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:10.730 I/O targets: 00:15:10.730 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:10.730 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:15:10.730 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:15:10.730 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:10.730 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:10.730 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:10.730 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:10.730 00:15:10.730 00:15:10.730 CUnit - A unit testing framework for C - Version 2.1-3 00:15:10.730 http://cunit.sourceforge.net/ 00:15:10.730 00:15:10.730 00:15:10.730 Suite: bdevio tests on: Nvme3n1 00:15:10.730 Test: blockdev write read block ...passed 00:15:10.730 Test: blockdev write zeroes read block ...passed 00:15:10.730 Test: blockdev write zeroes read no split ...passed 00:15:10.730 Test: blockdev write zeroes read split ...passed 00:15:10.730 Test: blockdev write zeroes read split partial ...passed 00:15:10.730 Test: blockdev reset ...[2024-12-09 22:56:37.890659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:15:10.730 [2024-12-09 22:56:37.894803] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:15:10.730 passed 00:15:10.730 Test: blockdev write read 8 blocks ...passed 00:15:10.730 Test: blockdev write read size > 128k ...passed 00:15:10.730 Test: blockdev write read invalid size ...passed 00:15:10.730 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.730 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.730 Test: blockdev write read max offset ...passed 00:15:10.730 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.730 Test: blockdev writev readv 8 blocks ...passed 00:15:10.730 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.730 Test: blockdev writev readv block ...passed 00:15:10.730 Test: blockdev writev readv size > 128k ...passed 00:15:10.730 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.730 Test: blockdev comparev and writev ...[2024-12-09 22:56:37.904421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bbe04000 len:0x1000 00:15:10.730 [2024-12-09 22:56:37.904485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:10.730 passed 00:15:10.730 Test: blockdev nvme passthru rw ...passed 00:15:10.730 Test: blockdev nvme passthru vendor specific ...passed 00:15:10.730 Test: blockdev nvme admin passthru ...[2024-12-09 22:56:37.905445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:15:10.730 [2024-12-09 22:56:37.905500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:15:10.730 passed 00:15:10.730 Test: blockdev copy ...passed 00:15:10.730 Suite: bdevio tests on: Nvme2n3 00:15:10.730 Test: blockdev write read block ...passed 00:15:10.730 Test: blockdev write zeroes read block ...passed 00:15:10.730 Test: blockdev write zeroes read no split ...passed 00:15:10.730 Test: blockdev write zeroes read split ...passed 00:15:10.730 Test: blockdev write zeroes read split partial ...passed 00:15:10.730 Test: blockdev reset ...[2024-12-09 22:56:37.979630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:15:10.730 [2024-12-09 22:56:37.984099] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:15:10.730 passed 00:15:10.730 Test: blockdev write read 8 blocks ...
00:15:10.730 passed 00:15:10.730 Test: blockdev write read size > 128k ...passed 00:15:10.730 Test: blockdev write read invalid size ...passed 00:15:10.730 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.730 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.730 Test: blockdev write read max offset ...passed 00:15:10.730 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.730 Test: blockdev writev readv 8 blocks ...passed 00:15:10.730 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.730 Test: blockdev writev readv block ...passed 00:15:10.730 Test: blockdev writev readv size > 128k ...passed 00:15:10.730 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.730 Test: blockdev comparev and writev ...[2024-12-09 22:56:37.993432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bbe02000 len:0x1000 00:15:10.730 [2024-12-09 22:56:37.993523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:10.730 passed 00:15:10.730 Test: blockdev nvme passthru rw ...passed 00:15:10.730 Test: blockdev nvme passthru vendor specific ...passed 00:15:10.730 Test: blockdev nvme admin passthru ...[2024-12-09 22:56:37.994356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:15:10.730 [2024-12-09 22:56:37.994409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:15:10.730 passed 00:15:10.730 Test: blockdev copy ...passed 00:15:10.730 Suite: bdevio tests on: Nvme2n2 00:15:10.730 Test: blockdev write read block ...passed 00:15:10.730 Test: blockdev write zeroes read block ...passed 00:15:10.730 Test: blockdev write zeroes read no split ...passed 00:15:10.989 Test: blockdev write zeroes read split ...passed 00:15:10.989 Test: blockdev write zeroes read split partial ...passed 00:15:10.989 Test: blockdev reset ...[2024-12-09 22:56:38.109949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:15:10.989 [2024-12-09 22:56:38.114474] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:15:10.989 passed 00:15:10.989 Test: blockdev write read size > 128k ...passed 00:15:10.989 Test: blockdev write read invalid size ...passed 00:15:10.989 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.989 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.989 Test: blockdev write read max offset ...passed 00:15:10.989 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.989 Test: blockdev writev readv 8 blocks ...passed 00:15:10.989 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.989 Test: blockdev writev readv block ...passed 00:15:10.989 Test: blockdev writev readv size > 128k ...passed 00:15:10.989 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.989 Test: blockdev comparev and writev ...[2024-12-09 22:56:38.123603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfc38000 len:0x1000 00:15:10.989 [2024-12-09 22:56:38.123660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:10.989 passed 00:15:10.989 Test: blockdev nvme passthru rw ...passed 00:15:10.989 Test: blockdev nvme passthru vendor specific ...[2024-12-09 22:56:38.124572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:15:10.989 passed 00:15:10.989 Test: blockdev nvme admin passthru ...[2024-12-09 22:56:38.124721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:15:10.989 passed 00:15:10.989 Test: blockdev copy ...passed 00:15:10.989 Suite: bdevio tests on: Nvme2n1 00:15:10.989 Test: blockdev write read block ...passed 00:15:10.989 Test: blockdev write zeroes read block ...passed 00:15:10.989 Test: blockdev write zeroes read no split ...passed 00:15:10.989 Test: blockdev write zeroes read split ...passed 00:15:10.989 Test: blockdev write zeroes read split partial ...passed 00:15:10.989 Test: blockdev reset ...[2024-12-09 22:56:38.201917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:15:10.989 [2024-12-09 22:56:38.206394] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:15:10.989 passed 00:15:10.989 Test: blockdev write read 8 blocks ...passed 00:15:10.989 Test: blockdev write read size > 128k ...passed 00:15:10.989 Test: blockdev write read invalid size ...passed 00:15:10.989 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.989 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.989 Test: blockdev write read max offset ...passed 00:15:10.989 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.989 Test: blockdev writev readv 8 blocks ...passed 00:15:10.989 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.989 Test: blockdev writev readv block ...passed 00:15:10.989 Test: blockdev writev readv size > 128k ...passed 00:15:10.989 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.989 Test: blockdev comparev and writev ...[2024-12-09 22:56:38.216335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfc34000 len:0x1000 00:15:10.989 [2024-12-09 22:56:38.216539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:10.989 passed 00:15:10.989 Test: blockdev nvme passthru rw ...passed 00:15:10.989 Test: blockdev nvme passthru vendor specific ...[2024-12-09 22:56:38.217809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:15:10.989 [2024-12-09 22:56:38.217957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:15:10.989 passed 00:15:10.989 Test: blockdev nvme admin passthru ...passed 00:15:10.989 Test: blockdev copy ...passed 00:15:10.989 Suite: bdevio tests on: Nvme1n1p2 00:15:10.989 Test: blockdev write read block ...passed 00:15:10.989 Test: blockdev write zeroes read block ...passed 00:15:10.989 Test: blockdev write zeroes read no split ...passed 00:15:10.989 Test: blockdev write zeroes read split ...passed 00:15:10.989 Test: blockdev write zeroes read split partial ...passed 00:15:10.989 Test: blockdev reset ...[2024-12-09 22:56:38.299030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:15:10.989 passed 00:15:10.989 Test: blockdev write read 8 blocks ...[2024-12-09 22:56:38.302858] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:15:10.989 passed 00:15:10.989 Test: blockdev write read size > 128k ...passed 00:15:10.989 Test: blockdev write read invalid size ...passed 00:15:10.989 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:10.989 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:10.989 Test: blockdev write read max offset ...passed 00:15:10.989 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:10.989 Test: blockdev writev readv 8 blocks ...passed 00:15:10.989 Test: blockdev writev readv 30 x 1block ...passed 00:15:10.989 Test: blockdev writev readv block ...passed 00:15:10.990 Test: blockdev writev readv size > 128k ...passed 00:15:10.990 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:10.990 Test: blockdev comparev and writev ...[2024-12-09 22:56:38.311461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cfc30000 len:0x1000 00:15:10.990 [2024-12-09 22:56:38.311511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:10.990 passed 00:15:10.990 Test: blockdev nvme passthru rw ...passed 00:15:10.990 Test: blockdev nvme passthru vendor specific ...passed 00:15:10.990 Test: blockdev nvme admin passthru ...passed 00:15:10.990 Test: blockdev copy ...passed 00:15:10.990 Suite: bdevio tests on: Nvme1n1p1 00:15:10.990 Test: blockdev write read block ...passed 00:15:10.990 Test: blockdev write zeroes read block ...passed 00:15:10.990 Test: blockdev write zeroes read no split ...passed 00:15:11.249 Test: blockdev write zeroes read split ...passed 00:15:11.249 Test: blockdev write zeroes read split partial ...passed 00:15:11.249 Test: blockdev reset ...[2024-12-09 22:56:38.376696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:15:11.249 [2024-12-09 22:56:38.380632] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:15:11.249 passed 00:15:11.249 Test: blockdev write read 8 blocks ... 
00:15:11.249 passed 00:15:11.249 Test: blockdev write read size > 128k ...passed 00:15:11.249 Test: blockdev write read invalid size ...passed 00:15:11.249 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:11.249 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:11.249 Test: blockdev write read max offset ...passed 00:15:11.249 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:11.249 Test: blockdev writev readv 8 blocks ...passed 00:15:11.249 Test: blockdev writev readv 30 x 1block ...passed 00:15:11.249 Test: blockdev writev readv block ...passed 00:15:11.249 Test: blockdev writev readv size > 128k ...passed 00:15:11.249 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:11.249 Test: blockdev comparev and writev ...[2024-12-09 22:56:38.389681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bc00e000 len:0x1000 00:15:11.249 [2024-12-09 22:56:38.389731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:11.249 passed 00:15:11.249 Test: blockdev nvme passthru rw ...passed 00:15:11.249 Test: blockdev nvme passthru vendor specific ...passed 00:15:11.249 Test: blockdev nvme admin passthru ...passed 00:15:11.249 Test: blockdev copy ...passed 00:15:11.249 Suite: bdevio tests on: Nvme0n1 00:15:11.249 Test: blockdev write read block ...passed 00:15:11.249 Test: blockdev write zeroes read block ...passed 00:15:11.249 Test: blockdev write zeroes read no split ...passed 00:15:11.249 Test: blockdev write zeroes read split ...passed 00:15:11.249 Test: blockdev write zeroes read split partial ...passed 00:15:11.249 Test: blockdev reset ...[2024-12-09 22:56:38.460244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:15:11.249 [2024-12-09 22:56:38.464153] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:15:11.249 passed 00:15:11.249 Test: blockdev write read 8 blocks ...passed 00:15:11.249 Test: blockdev write read size > 128k ...passed 00:15:11.250 Test: blockdev write read invalid size ...passed 00:15:11.250 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:11.250 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:11.250 Test: blockdev write read max offset ...passed 00:15:11.250 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:11.250 Test: blockdev writev readv 8 blocks ...passed 00:15:11.250 Test: blockdev writev readv 30 x 1block ...passed 00:15:11.250 Test: blockdev writev readv block ...passed 00:15:11.250 Test: blockdev writev readv size > 128k ...passed 00:15:11.250 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:11.250 Test: blockdev comparev and writev ...[2024-12-09 22:56:38.473277] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has separate metadata which is not supported yet. 00:15:11.250 passed 00:15:11.250 Test: blockdev nvme passthru rw ... 
00:15:11.250 passed 00:15:11.250 Test: blockdev nvme passthru vendor specific ...passed 00:15:11.250 Test: blockdev nvme admin passthru ...[2024-12-09 22:56:38.474137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:15:11.250 [2024-12-09 22:56:38.474207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:15:11.250 passed 00:15:11.250 Test: blockdev copy ...passed 00:15:11.250 00:15:11.250 Run Summary: Type Total Ran Passed Failed Inactive 00:15:11.250 suites 7 7 n/a 0 0 00:15:11.250 tests 161 161 161 0 0 00:15:11.250 asserts 1025 1025 1025 0 n/a 00:15:11.250 00:15:11.250 Elapsed time = 1.766 seconds 00:15:11.250 0 00:15:11.250 22:56:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62924 00:15:11.250 22:56:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62924 ']' 00:15:11.250 22:56:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62924 00:15:11.250 22:56:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:11.250 22:56:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.250 22:56:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62924 00:15:11.250 22:56:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.250 22:56:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.250 22:56:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62924' 00:15:11.250 killing process with pid 62924 00:15:11.250 22:56:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62924 00:15:11.250 22:56:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62924 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:12.629 00:15:12.629 real 0m3.127s 00:15:12.629 user 0m7.956s 00:15:12.629 sys 0m0.494s 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.629 ************************************ 00:15:12.629 END TEST bdev_bounds 00:15:12.629 ************************************ 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:12.629 22:56:39 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:15:12.629 22:56:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:12.629 22:56:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.629 22:56:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:12.629 ************************************ 00:15:12.629 START TEST bdev_nbd 00:15:12.629 ************************************ 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62989 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:12.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62989 /var/tmp/spdk-nbd.sock 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62989 ']' 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.629 22:56:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:12.629 [2024-12-09 22:56:39.870849] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:15:12.629 [2024-12-09 22:56:39.870984] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.888 [2024-12-09 22:56:40.057729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.888 [2024-12-09 22:56:40.183499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.824 22:56:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:15:13.825 22:56:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.084 1+0 records in 00:15:14.084 1+0 records out 00:15:14.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071609 s, 5.7 MB/s 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:15:14.084 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:15:14.342 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:14.342 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:14.342 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:14.342 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:14.342 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:14.342 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.342 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.342 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:14.342 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:14.342 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.343 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.343 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.343 1+0 records in 00:15:14.343 1+0 records out 00:15:14.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636845 s, 6.4 MB/s 00:15:14.343 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.343 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:14.343 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.343 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.343 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:14.343 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:14.343 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:15:14.343 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.601 1+0 records in 00:15:14.601 1+0 records out 00:15:14.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793555 s, 5.2 MB/s 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:15:14.601 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:15:14.861 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:14.861 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:14.861 22:56:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:14.861 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:14.861 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:14.861 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.861 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.861 22:56:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.861 1+0 records in 00:15:14.861 1+0 records out 00:15:14.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000905802 s, 4.5 MB/s 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:15:14.861 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.121 1+0 records in 00:15:15.121 1+0 records out 00:15:15.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927323 s, 4.4 MB/s 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:15:15.121 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.381 1+0 records in 00:15:15.381 1+0 records out 00:15:15.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00080798 s, 5.1 MB/s 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:15:15.381 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:15.645 1+0 records in 00:15:15.645 1+0 records out 00:15:15.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866219 s, 4.7 MB/s 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:15:15.645 22:56:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:15.914 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:15.914 { 00:15:15.914 "nbd_device": "/dev/nbd0", 00:15:15.914 "bdev_name": "Nvme0n1" 00:15:15.914 }, 00:15:15.914 { 00:15:15.914 "nbd_device": "/dev/nbd1", 00:15:15.914 "bdev_name": "Nvme1n1p1" 00:15:15.915 }, 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd2", 00:15:15.915 "bdev_name": "Nvme1n1p2" 00:15:15.915 }, 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd3", 00:15:15.915 "bdev_name": "Nvme2n1" 00:15:15.915 }, 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd4", 00:15:15.915 "bdev_name": "Nvme2n2" 00:15:15.915 }, 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd5", 00:15:15.915 "bdev_name": "Nvme2n3" 00:15:15.915 }, 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd6", 00:15:15.915 "bdev_name": "Nvme3n1" 00:15:15.915 } 00:15:15.915 ]' 00:15:15.915 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:15.915 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:15.915 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd0", 00:15:15.915 "bdev_name": "Nvme0n1" 00:15:15.915 }, 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd1", 00:15:15.915 "bdev_name": "Nvme1n1p1" 00:15:15.915 }, 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd2", 00:15:15.915 "bdev_name": "Nvme1n1p2" 00:15:15.915 }, 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd3", 00:15:15.915 "bdev_name": "Nvme2n1" 00:15:15.915 }, 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd4", 00:15:15.915 "bdev_name": "Nvme2n2" 00:15:15.915 }, 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd5", 00:15:15.915 "bdev_name": "Nvme2n3" 00:15:15.915 }, 00:15:15.915 { 00:15:15.915 "nbd_device": "/dev/nbd6", 00:15:15.915 "bdev_name": "Nvme3n1" 00:15:15.915 } 00:15:15.915 ]' 00:15:15.915 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:15:15.915 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:15.915 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:15:15.915 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:15.915 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:15.915 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:15.915 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:16.174 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:16.174 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:16.174 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:16.174 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.174 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.174 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:16.174 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.174 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.174 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.174 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:16.433 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:16.433 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:16.433 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:16.433 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.433 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.433 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:16.433 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.433 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.433 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.433 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:16.692 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:16.692 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:16.692 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:16.692 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.692 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.692 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:16.692 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.692 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.692 22:56:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.692 22:56:43 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:16.692 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:16.692 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:16.692 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.951 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
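(The teardown being traced here repeats one pattern per device: nbd_stop_disk detaches the bdev over the RPC socket, then waitfornbd_exit polls /proc/partitions until the kernel entry for the nbd name disappears. A minimal sketch of that pattern, pieced together from the traced commands; the 20-iteration bound, grep test, and rpc.py path are taken from the trace, the helper name nbd_stop_and_wait and the sleep interval are assumptions, and the real helpers in bdev/nbd_common.sh and common/autotest_common.sh may differ in details.)

#!/usr/bin/env bash
# Sketch only: stop one nbd device and wait for the kernel to drop it.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in the trace
sock=/var/tmp/spdk-nbd.sock

nbd_stop_and_wait() {
    local dev=$1 name
    name=$(basename "$dev")                    # /dev/nbd6 -> nbd6
    "$rpc_py" -s "$sock" nbd_stop_disk "$dev"  # detach the bdev from the nbd node
    for ((i = 1; i <= 20; i++)); do            # bounded retry, as in the trace
        grep -q -w "$name" /proc/partitions || return 0
        sleep 0.1                              # assumed back-off between polls
    done
    return 1                                   # device still present: give up
}

nbd_stop_and_wait /dev/nbd6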
00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.520 22:56:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:17.779 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:17.780 22:56:45 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:15:17.780 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:15:18.039 /dev/nbd0 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.039 1+0 records in 00:15:18.039 1+0 records out 00:15:18.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722574 s, 5.7 MB/s 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:15:18.039 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:15:18.298 /dev/nbd1 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.298 22:56:45 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.298 1+0 records in 00:15:18.298 1+0 records out 00:15:18.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697091 s, 5.9 MB/s 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:15:18.298 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:15:18.557 /dev/nbd10 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.557 1+0 records in 00:15:18.557 1+0 records out 00:15:18.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706917 s, 5.8 MB/s 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:15:18.557 22:56:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:15:18.816 /dev/nbd11 00:15:18.816 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:18.816 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:18.817 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:18.817 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:18.817 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.817 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.817 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:18.817 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:18.817 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.817 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.817 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.817 1+0 records in 00:15:18.817 1+0 records out 00:15:18.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619187 s, 6.6 MB/s 00:15:18.817 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:15:19.076 /dev/nbd12 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
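(The start-up direction mirrors the teardown: nbd_start_disk pairs one bdev with one /dev/nbdX node, and waitfornbd only returns once a single 4 KiB O_DIRECT read from the new node copies a non-zero number of bytes, which is exactly what the dd/stat pairs in this trace check. A rough single-device equivalent, with paths and the nbdtest scratch name as they appear in the trace; treat it as a sketch, not the harness code.)

#!/usr/bin/env bash
# Sketch only: attach a bdev to an nbd node and prove it serves reads.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

"$rpc_py" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12
grep -q -w nbd12 /proc/partitions                        # kernel registered the node
dd if=/dev/nbd12 of="$tmp" bs=4096 count=1 iflag=direct  # one direct-I/O block read
size=$(stat -c %s "$tmp")                                # bytes actually copied
rm -f "$tmp"
[ "$size" != 0 ]                                         # non-empty read: device is live

(Once every device passes this check, the nbd_get_count helper seen in this trace re-reads nbd_get_disks and counts the /dev/nbd entries with jq -r '.[] | .nbd_device' and grep -c, as the JSON dumps in this log show.)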
00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.076 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.335 1+0 records in 00:15:19.335 1+0 records out 00:15:19.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00096761 s, 4.2 MB/s 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:15:19.335 /dev/nbd13 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.335 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.594 1+0 records in 00:15:19.594 1+0 records out 00:15:19.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000971279 s, 4.2 MB/s 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:15:19.594 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:15:19.594 /dev/nbd14 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.854 1+0 records in 00:15:19.854 1+0 records out 00:15:19.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000948224 s, 4.3 MB/s 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:19.854 22:56:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:20.117 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:20.117 { 00:15:20.117 "nbd_device": "/dev/nbd0", 00:15:20.117 "bdev_name": "Nvme0n1" 00:15:20.117 }, 00:15:20.117 { 00:15:20.117 "nbd_device": "/dev/nbd1", 00:15:20.117 "bdev_name": "Nvme1n1p1" 00:15:20.117 }, 00:15:20.117 { 00:15:20.117 "nbd_device": "/dev/nbd10", 00:15:20.117 "bdev_name": "Nvme1n1p2" 00:15:20.117 }, 00:15:20.117 { 00:15:20.117 "nbd_device": "/dev/nbd11", 00:15:20.117 "bdev_name": "Nvme2n1" 00:15:20.117 }, 00:15:20.117 { 00:15:20.117 "nbd_device": "/dev/nbd12", 00:15:20.117 "bdev_name": "Nvme2n2" 00:15:20.117 }, 00:15:20.117 { 00:15:20.117 "nbd_device": "/dev/nbd13", 00:15:20.117 "bdev_name": "Nvme2n3" 
00:15:20.117 }, 00:15:20.117 { 00:15:20.117 "nbd_device": "/dev/nbd14", 00:15:20.117 "bdev_name": "Nvme3n1" 00:15:20.117 } 00:15:20.117 ]' 00:15:20.117 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:20.117 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:20.117 { 00:15:20.117 "nbd_device": "/dev/nbd0", 00:15:20.117 "bdev_name": "Nvme0n1" 00:15:20.117 }, 00:15:20.117 { 00:15:20.117 "nbd_device": "/dev/nbd1", 00:15:20.117 "bdev_name": "Nvme1n1p1" 00:15:20.117 }, 00:15:20.117 { 00:15:20.117 "nbd_device": "/dev/nbd10", 00:15:20.117 "bdev_name": "Nvme1n1p2" 00:15:20.117 }, 00:15:20.117 { 00:15:20.118 "nbd_device": "/dev/nbd11", 00:15:20.118 "bdev_name": "Nvme2n1" 00:15:20.118 }, 00:15:20.118 { 00:15:20.118 "nbd_device": "/dev/nbd12", 00:15:20.118 "bdev_name": "Nvme2n2" 00:15:20.118 }, 00:15:20.118 { 00:15:20.118 "nbd_device": "/dev/nbd13", 00:15:20.118 "bdev_name": "Nvme2n3" 00:15:20.118 }, 00:15:20.118 { 00:15:20.118 "nbd_device": "/dev/nbd14", 00:15:20.118 "bdev_name": "Nvme3n1" 00:15:20.118 } 00:15:20.118 ]' 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:20.118 /dev/nbd1 00:15:20.118 /dev/nbd10 00:15:20.118 /dev/nbd11 00:15:20.118 /dev/nbd12 00:15:20.118 /dev/nbd13 00:15:20.118 /dev/nbd14' 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:20.118 /dev/nbd1 00:15:20.118 /dev/nbd10 00:15:20.118 /dev/nbd11 00:15:20.118 /dev/nbd12 00:15:20.118 /dev/nbd13 00:15:20.118 /dev/nbd14' 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:20.118 256+0 records in 00:15:20.118 256+0 records out 00:15:20.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148486 s, 70.6 MB/s 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:20.118 256+0 records in 00:15:20.118 256+0 records out 00:15:20.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.145055 s, 7.2 MB/s 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.118 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:20.383 256+0 records in 00:15:20.383 256+0 records out 00:15:20.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148327 s, 7.1 MB/s 00:15:20.383 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.383 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:20.642 256+0 records in 00:15:20.642 256+0 records out 00:15:20.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14599 s, 7.2 MB/s 00:15:20.642 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.642 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:20.642 256+0 records in 00:15:20.642 256+0 records out 00:15:20.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143326 s, 7.3 MB/s 00:15:20.642 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.642 22:56:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:20.900 256+0 records in 00:15:20.900 256+0 records out 00:15:20.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143812 s, 7.3 MB/s 00:15:20.900 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.900 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:20.900 256+0 records in 00:15:20.900 256+0 records out 00:15:20.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163789 s, 6.4 MB/s 00:15:20.901 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:20.901 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:15:21.159 256+0 records in 00:15:21.159 256+0 records out 00:15:21.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141969 s, 7.4 MB/s 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:21.159 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.160 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:21.418 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:21.418 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:21.418 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:21.418 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.418 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.418 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:21.418 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.418 22:56:48 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:15:21.418 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.418 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:21.677 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:21.677 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:21.677 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:21.677 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.677 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.677 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:21.677 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.677 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.677 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.677 22:56:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:21.936 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:21.936 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:21.936 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:21.936 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:21.936 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:21.936 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:21.936 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:21.936 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:21.936 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:21.936 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:22.195 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:22.195 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:22.195 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:22.195 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.195 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.195 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:22.195 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:22.195 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.195 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.195 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:22.454 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:15:22.454 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:22.454 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:22.454 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.454 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.454 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:22.454 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:22.454 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.454 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.454 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:22.714 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:22.714 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:22.714 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:22.714 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.714 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.714 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:22.714 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:22.714 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.714 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:22.714 22:56:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:15:22.973 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:15:22.973 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:15:22.973 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:15:22.973 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:22.973 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:22.973 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:15:22.973 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:22.973 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:22.973 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:22.973 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:22.973 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:23.231 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:23.490 malloc_lvol_verify 00:15:23.490 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:23.750 59477b83-fd4d-4c2c-8f6c-0389609132a2 00:15:23.750 22:56:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:23.750 55f195eb-4baa-4cc1-9d68-130493db5972 00:15:24.009 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:24.009 /dev/nbd0 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:24.268 mke2fs 1.47.0 (5-Feb-2023) 00:15:24.268 Discarding device blocks: 0/4096 done 00:15:24.268 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:24.268 00:15:24.268 Allocating group tables: 0/1 done 00:15:24.268 Writing inode tables: 0/1 done 00:15:24.268 Creating journal (1024 blocks): done 00:15:24.268 Writing superblocks and filesystem accounting information: 0/1 done 00:15:24.268 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:24.268 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62989 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62989 ']' 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62989 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62989 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:24.546 killing process with pid 62989 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62989' 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62989 00:15:24.546 22:56:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62989 00:15:25.925 22:56:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:25.925 00:15:25.925 real 0m13.224s 00:15:25.925 user 0m16.962s 00:15:25.925 sys 0m5.584s 00:15:25.925 22:56:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.925 22:56:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:25.925 ************************************ 00:15:25.925 END TEST bdev_nbd 00:15:25.925 ************************************ 00:15:25.925 22:56:53 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:15:25.925 22:56:53 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:15:25.925 22:56:53 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:15:25.925 skipping fio tests on NVMe due to multi-ns failures. 00:15:25.925 22:56:53 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:15:25.925 22:56:53 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:25.925 22:56:53 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:25.925 22:56:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:25.925 22:56:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.925 22:56:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:25.925 ************************************ 00:15:25.925 START TEST bdev_verify 00:15:25.925 ************************************ 00:15:25.925 22:56:53 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:25.925 [2024-12-09 22:56:53.164600] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:15:25.925 [2024-12-09 22:56:53.164747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63424 ] 00:15:26.184 [2024-12-09 22:56:53.351341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:26.184 [2024-12-09 22:56:53.488271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.184 [2024-12-09 22:56:53.488274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.120 Running I/O for 5 seconds... 
00:15:29.433 21184.00 IOPS, 82.75 MiB/s [2024-12-09T22:56:57.703Z] 21152.00 IOPS, 82.62 MiB/s [2024-12-09T22:56:58.638Z] 21162.67 IOPS, 82.67 MiB/s [2024-12-09T22:56:59.574Z] 21376.00 IOPS, 83.50 MiB/s [2024-12-09T22:56:59.574Z] 21708.80 IOPS, 84.80 MiB/s 00:15:32.238 Latency(us) 00:15:32.238 [2024-12-09T22:56:59.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.238 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x0 length 0xbd0bd 00:15:32.238 Nvme0n1 : 5.04 1549.20 6.05 0.00 0.00 82342.19 19687.12 74537.33 00:15:32.238 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:32.238 Nvme0n1 : 5.05 1494.09 5.84 0.00 0.00 85370.02 19476.56 93066.38 00:15:32.238 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x0 length 0x4ff80 00:15:32.238 Nvme1n1p1 : 5.06 1555.01 6.07 0.00 0.00 81930.97 7474.79 67799.49 00:15:32.238 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x4ff80 length 0x4ff80 00:15:32.238 Nvme1n1p1 : 5.06 1493.63 5.83 0.00 0.00 85086.04 20318.79 90118.58 00:15:32.238 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x0 length 0x4ff7f 00:15:32.238 Nvme1n1p2 : 5.06 1554.55 6.07 0.00 0.00 81840.98 7106.31 64430.57 00:15:32.238 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:15:32.238 Nvme1n1p2 : 5.08 1500.24 5.86 0.00 0.00 84545.69 7737.99 88434.12 00:15:32.238 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x0 length 0x80000 00:15:32.238 Nvme2n1 : 5.07 1554.00 6.07 0.00 0.00 81750.11 7895.90 64851.69 00:15:32.238 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x80000 length 0x80000 00:15:32.238 Nvme2n1 : 5.08 1499.87 5.86 0.00 0.00 84411.40 7211.59 87170.78 00:15:32.238 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x0 length 0x80000 00:15:32.238 Nvme2n2 : 5.07 1553.61 6.07 0.00 0.00 81658.70 7948.54 65693.92 00:15:32.238 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x80000 length 0x80000 00:15:32.238 Nvme2n2 : 5.09 1508.91 5.89 0.00 0.00 83903.30 9633.00 88434.12 00:15:32.238 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x0 length 0x80000 00:15:32.238 Nvme2n3 : 5.08 1562.44 6.10 0.00 0.00 81239.30 9159.25 67799.49 00:15:32.238 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x80000 length 0x80000 00:15:32.238 Nvme2n3 : 5.09 1508.56 5.89 0.00 0.00 83807.92 9685.64 91381.92 00:15:32.238 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x0 length 0x20000 00:15:32.238 Nvme3n1 : 5.08 1562.03 6.10 0.00 0.00 81115.36 8896.05 68641.72 00:15:32.238 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.238 Verification LBA range: start 0x20000 length 0x20000 00:15:32.238 Nvme3n1 : 5.09 
1508.23 5.89 0.00 0.00 83753.49 9843.56 93066.38 00:15:32.238 [2024-12-09T22:56:59.574Z] =================================================================================================================== 00:15:32.238 [2024-12-09T22:56:59.574Z] Total : 21404.40 83.61 0.00 0.00 83028.56 7106.31 93066.38 00:15:34.141 00:15:34.141 real 0m7.916s 00:15:34.141 user 0m14.558s 00:15:34.141 sys 0m0.370s 00:15:34.141 22:57:00 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.141 22:57:00 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:34.141 ************************************ 00:15:34.141 END TEST bdev_verify 00:15:34.141 ************************************ 00:15:34.141 22:57:01 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:34.141 22:57:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:34.141 22:57:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.141 22:57:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:34.141 ************************************ 00:15:34.141 START TEST bdev_verify_big_io 00:15:34.141 ************************************ 00:15:34.141 22:57:01 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:34.141 [2024-12-09 22:57:01.153439] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:15:34.141 [2024-12-09 22:57:01.153582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63522 ] 00:15:34.141 [2024-12-09 22:57:01.340540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:34.141 [2024-12-09 22:57:01.471171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.141 [2024-12-09 22:57:01.471203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.077 Running I/O for 5 seconds... 
00:15:40.268 1581.00 IOPS, 98.81 MiB/s [2024-12-09T22:57:08.560Z] 3188.50 IOPS, 199.28 MiB/s [2024-12-09T22:57:08.560Z] 3865.33 IOPS, 241.58 MiB/s 00:15:41.224 Latency(us) 00:15:41.224 [2024-12-09T22:57:08.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.224 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0x0 length 0xbd0b 00:15:41.224 Nvme0n1 : 5.73 144.94 9.06 0.00 0.00 839113.10 13580.95 1131956.74 00:15:41.224 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:41.224 Nvme0n1 : 5.55 149.27 9.33 0.00 0.00 831662.61 34320.86 889394.58 00:15:41.224 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0x0 length 0x4ff8 00:15:41.224 Nvme1n1p1 : 5.64 147.56 9.22 0.00 0.00 809200.04 69483.95 943297.29 00:15:41.224 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0x4ff8 length 0x4ff8 00:15:41.224 Nvme1n1p1 : 5.63 153.36 9.59 0.00 0.00 797686.34 74958.44 832122.96 00:15:41.224 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0x0 length 0x4ff7 00:15:41.224 Nvme1n1p2 : 5.78 151.41 9.46 0.00 0.00 766556.82 93066.38 764744.58 00:15:41.224 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0x4ff7 length 0x4ff7 00:15:41.224 Nvme1n1p2 : 5.69 152.09 9.51 0.00 0.00 780012.67 76221.79 700735.13 00:15:41.224 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0x0 length 0x8000 00:15:41.224 Nvme2n1 : 5.81 157.03 9.81 0.00 0.00 722608.34 25688.01 970248.64 00:15:41.224 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0x8000 length 0x8000 00:15:41.224 Nvme2n1 : 5.70 157.30 9.83 0.00 0.00 745337.35 56429.39 768113.50 00:15:41.224 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0x0 length 0x8000 00:15:41.224 Nvme2n2 : 5.84 161.09 10.07 0.00 0.00 681406.12 25056.33 990462.15 00:15:41.224 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0x8000 length 0x8000 00:15:41.224 Nvme2n2 : 5.73 160.27 10.02 0.00 0.00 715427.29 35584.21 781589.18 00:15:41.224 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0x0 length 0x8000 00:15:41.224 Nvme2n3 : 5.94 181.03 11.31 0.00 0.00 596230.31 9843.56 1414945.93 00:15:41.224 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:41.224 Verification LBA range: start 0x8000 length 0x8000 00:15:41.224 Nvme2n3 : 5.76 165.90 10.37 0.00 0.00 677265.08 25582.73 889394.58 00:15:41.224 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:41.225 Verification LBA range: start 0x0 length 0x2000 00:15:41.225 Nvme3n1 : 5.99 216.41 13.53 0.00 0.00 487429.21 195.75 1441897.28 00:15:41.225 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:41.225 Verification LBA range: start 0x2000 length 0x2000 00:15:41.225 Nvme3n1 : 5.78 177.43 11.09 0.00 0.00 620112.78 5474.49 889394.58 00:15:41.225 
[2024-12-09T22:57:08.561Z] =================================================================================================================== 00:15:41.225 [2024-12-09T22:57:08.561Z] Total : 2275.07 142.19 0.00 0.00 707228.75 195.75 1441897.28 00:15:43.754 00:15:43.754 real 0m9.464s 00:15:43.754 user 0m17.630s 00:15:43.754 sys 0m0.400s 00:15:43.754 22:57:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.754 22:57:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:43.754 ************************************ 00:15:43.754 END TEST bdev_verify_big_io 00:15:43.754 ************************************ 00:15:43.754 22:57:10 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:43.754 22:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:43.754 22:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.754 22:57:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:43.754 ************************************ 00:15:43.754 START TEST bdev_write_zeroes 00:15:43.754 ************************************ 00:15:43.755 22:57:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:43.755 [2024-12-09 22:57:10.727532] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:15:43.755 [2024-12-09 22:57:10.727753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63650 ] 00:15:43.755 [2024-12-09 22:57:10.913969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.755 [2024-12-09 22:57:11.052295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.689 Running I/O for 1 seconds... 
00:15:45.623 62656.00 IOPS, 244.75 MiB/s 00:15:45.623 Latency(us) 00:15:45.623 [2024-12-09T22:57:12.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.623 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.623 Nvme0n1 : 1.02 8930.69 34.89 0.00 0.00 14285.18 12159.69 29899.16 00:15:45.623 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.623 Nvme1n1p1 : 1.03 8920.40 34.85 0.00 0.00 14280.76 12370.25 32215.29 00:15:45.623 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.623 Nvme1n1p2 : 1.03 8910.60 34.81 0.00 0.00 14230.94 12107.05 29267.48 00:15:45.623 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.623 Nvme2n1 : 1.03 8952.72 34.97 0.00 0.00 14105.87 7580.07 25056.33 00:15:45.623 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.623 Nvme2n2 : 1.03 8943.74 34.94 0.00 0.00 14079.65 7737.99 24003.55 00:15:45.623 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.623 Nvme2n3 : 1.03 8935.07 34.90 0.00 0.00 14052.76 7895.90 23792.99 00:15:45.623 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:45.623 Nvme3n1 : 1.03 8864.45 34.63 0.00 0.00 14138.12 8106.46 31373.06 00:15:45.623 [2024-12-09T22:57:12.959Z] =================================================================================================================== 00:15:45.623 [2024-12-09T22:57:12.959Z] Total : 62457.68 243.98 0.00 0.00 14167.35 7580.07 32215.29 00:15:47.000 00:15:47.000 real 0m3.545s 00:15:47.000 user 0m3.068s 00:15:47.000 sys 0m0.358s 00:15:47.000 22:57:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.000 22:57:14 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:47.000 ************************************ 00:15:47.000 END TEST bdev_write_zeroes 00:15:47.000 ************************************ 00:15:47.000 22:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:47.000 22:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:47.000 22:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.000 22:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:47.000 ************************************ 00:15:47.000 START TEST bdev_json_nonenclosed 00:15:47.000 ************************************ 00:15:47.000 22:57:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:47.000 [2024-12-09 22:57:14.311170] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:15:47.000 [2024-12-09 22:57:14.311326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63708 ] 00:15:47.260 [2024-12-09 22:57:14.495348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.519 [2024-12-09 22:57:14.624401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.519 [2024-12-09 22:57:14.624519] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:47.519 [2024-12-09 22:57:14.624545] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:47.519 [2024-12-09 22:57:14.624558] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:47.788 00:15:47.788 real 0m0.677s 00:15:47.788 user 0m0.406s 00:15:47.788 sys 0m0.165s 00:15:47.788 22:57:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.788 22:57:14 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:47.788 ************************************ 00:15:47.788 END TEST bdev_json_nonenclosed 00:15:47.788 ************************************ 00:15:47.788 22:57:14 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:47.788 22:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:47.788 22:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.788 22:57:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:47.788 ************************************ 00:15:47.788 START TEST bdev_json_nonarray 00:15:47.788 ************************************ 00:15:47.788 22:57:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:47.788 [2024-12-09 22:57:15.061991] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:15:47.788 [2024-12-09 22:57:15.062137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63734 ] 00:15:48.046 [2024-12-09 22:57:15.231846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.305 [2024-12-09 22:57:15.410990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.305 [2024-12-09 22:57:15.411151] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:15:48.305 [2024-12-09 22:57:15.411188] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:48.305 [2024-12-09 22:57:15.411209] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:48.564 00:15:48.564 real 0m0.739s 00:15:48.564 user 0m0.482s 00:15:48.564 sys 0m0.151s 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.564 ************************************ 00:15:48.564 END TEST bdev_json_nonarray 00:15:48.564 ************************************ 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:48.564 22:57:15 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:15:48.564 22:57:15 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:15:48.564 22:57:15 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:15:48.564 22:57:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:48.564 22:57:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.564 22:57:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:48.564 ************************************ 00:15:48.564 START TEST bdev_gpt_uuid 00:15:48.564 ************************************ 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63765 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63765 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63765 ']' 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:48.564 22:57:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:48.564 [2024-12-09 22:57:15.878298] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:15:48.564 [2024-12-09 22:57:15.878475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63765 ] 00:15:48.823 [2024-12-09 22:57:16.060132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.083 [2024-12-09 22:57:16.191131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.020 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.020 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:15:50.020 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:50.020 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.020 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:50.280 Some configs were skipped because the RPC state that can call them passed over. 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:15:50.280 { 00:15:50.280 "name": "Nvme1n1p1", 00:15:50.280 "aliases": [ 00:15:50.280 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:15:50.280 ], 00:15:50.280 "product_name": "GPT Disk", 00:15:50.280 "block_size": 4096, 00:15:50.280 "num_blocks": 655104, 00:15:50.280 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:15:50.280 "assigned_rate_limits": { 00:15:50.280 "rw_ios_per_sec": 0, 00:15:50.280 "rw_mbytes_per_sec": 0, 00:15:50.280 "r_mbytes_per_sec": 0, 00:15:50.280 "w_mbytes_per_sec": 0 00:15:50.280 }, 00:15:50.280 "claimed": false, 00:15:50.280 "zoned": false, 00:15:50.280 "supported_io_types": { 00:15:50.280 "read": true, 00:15:50.280 "write": true, 00:15:50.280 "unmap": true, 00:15:50.280 "flush": true, 00:15:50.280 "reset": true, 00:15:50.280 "nvme_admin": false, 00:15:50.280 "nvme_io": false, 00:15:50.280 "nvme_io_md": false, 00:15:50.280 "write_zeroes": true, 00:15:50.280 "zcopy": false, 00:15:50.280 "get_zone_info": false, 00:15:50.280 "zone_management": false, 00:15:50.280 "zone_append": false, 00:15:50.280 "compare": true, 00:15:50.280 "compare_and_write": false, 00:15:50.280 "abort": true, 00:15:50.280 "seek_hole": false, 00:15:50.280 "seek_data": false, 00:15:50.280 "copy": true, 00:15:50.280 "nvme_iov_md": false 00:15:50.280 }, 00:15:50.280 "driver_specific": { 
00:15:50.280 "gpt": { 00:15:50.280 "base_bdev": "Nvme1n1", 00:15:50.280 "offset_blocks": 256, 00:15:50.280 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:15:50.280 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:15:50.280 "partition_name": "SPDK_TEST_first" 00:15:50.280 } 00:15:50.280 } 00:15:50.280 } 00:15:50.280 ]' 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:15:50.280 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:15:50.539 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:15:50.539 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:15:50.539 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.539 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:50.539 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.539 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:15:50.539 { 00:15:50.539 "name": "Nvme1n1p2", 00:15:50.539 "aliases": [ 00:15:50.539 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:15:50.539 ], 00:15:50.539 "product_name": "GPT Disk", 00:15:50.539 "block_size": 4096, 00:15:50.539 "num_blocks": 655103, 00:15:50.540 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:15:50.540 "assigned_rate_limits": { 00:15:50.540 "rw_ios_per_sec": 0, 00:15:50.540 "rw_mbytes_per_sec": 0, 00:15:50.540 "r_mbytes_per_sec": 0, 00:15:50.540 "w_mbytes_per_sec": 0 00:15:50.540 }, 00:15:50.540 "claimed": false, 00:15:50.540 "zoned": false, 00:15:50.540 "supported_io_types": { 00:15:50.540 "read": true, 00:15:50.540 "write": true, 00:15:50.540 "unmap": true, 00:15:50.540 "flush": true, 00:15:50.540 "reset": true, 00:15:50.540 "nvme_admin": false, 00:15:50.540 "nvme_io": false, 00:15:50.540 "nvme_io_md": false, 00:15:50.540 "write_zeroes": true, 00:15:50.540 "zcopy": false, 00:15:50.540 "get_zone_info": false, 00:15:50.540 "zone_management": false, 00:15:50.540 "zone_append": false, 00:15:50.540 "compare": true, 00:15:50.540 "compare_and_write": false, 00:15:50.540 "abort": true, 00:15:50.540 "seek_hole": false, 00:15:50.540 "seek_data": false, 00:15:50.540 "copy": true, 00:15:50.540 "nvme_iov_md": false 00:15:50.540 }, 00:15:50.540 "driver_specific": { 00:15:50.540 "gpt": { 00:15:50.540 "base_bdev": "Nvme1n1", 00:15:50.540 "offset_blocks": 655360, 00:15:50.540 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:15:50.540 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:15:50.540 "partition_name": "SPDK_TEST_second" 00:15:50.540 } 00:15:50.540 } 00:15:50.540 } 00:15:50.540 ]' 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63765 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63765 ']' 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63765 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63765 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.540 killing process with pid 63765 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63765' 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63765 00:15:50.540 22:57:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63765 00:15:53.074 00:15:53.074 real 0m4.544s 00:15:53.074 user 0m4.627s 00:15:53.074 sys 0m0.598s 00:15:53.074 22:57:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.074 22:57:20 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:15:53.074 ************************************ 00:15:53.074 END TEST bdev_gpt_uuid 00:15:53.074 ************************************ 00:15:53.074 22:57:20 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:15:53.074 22:57:20 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:15:53.074 22:57:20 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:15:53.074 22:57:20 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:53.074 22:57:20 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:53.074 22:57:20 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:15:53.074 22:57:20 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:15:53.074 22:57:20 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:15:53.074 22:57:20 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:53.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:53.901 Waiting for block devices as requested 00:15:54.159 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:54.160 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:15:54.160 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:54.418 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:59.712 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:59.712 22:57:26 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:15:59.712 22:57:26 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:15:59.712 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:15:59.712 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:15:59.712 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:15:59.712 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:15:59.712 22:57:27 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:15:59.712 00:15:59.712 real 1m7.972s 00:15:59.712 user 1m24.150s 00:15:59.712 sys 0m13.190s 00:15:59.712 22:57:27 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.712 22:57:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:15:59.712 ************************************ 00:15:59.712 END TEST blockdev_nvme_gpt 00:15:59.712 ************************************ 00:15:59.971 22:57:27 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:15:59.971 22:57:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:59.971 22:57:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.971 22:57:27 -- common/autotest_common.sh@10 -- # set +x 00:15:59.971 ************************************ 00:15:59.971 START TEST nvme 00:15:59.971 ************************************ 00:15:59.971 22:57:27 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:15:59.971 * Looking for test storage... 00:15:59.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:15:59.971 22:57:27 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:59.971 22:57:27 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:59.971 22:57:27 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:59.971 22:57:27 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:59.971 22:57:27 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:59.971 22:57:27 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:59.971 22:57:27 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:59.971 22:57:27 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:59.971 22:57:27 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:59.971 22:57:27 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:59.971 22:57:27 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:59.971 22:57:27 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:59.971 22:57:27 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:59.971 22:57:27 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:59.971 22:57:27 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:59.971 22:57:27 nvme -- scripts/common.sh@344 -- # case "$op" in 00:15:59.971 22:57:27 nvme -- scripts/common.sh@345 -- # : 1 00:15:59.971 22:57:27 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:59.971 22:57:27 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:59.971 22:57:27 nvme -- scripts/common.sh@365 -- # decimal 1 00:15:59.971 22:57:27 nvme -- scripts/common.sh@353 -- # local d=1 00:15:59.971 22:57:27 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:59.972 22:57:27 nvme -- scripts/common.sh@355 -- # echo 1 00:15:59.972 22:57:27 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:59.972 22:57:27 nvme -- scripts/common.sh@366 -- # decimal 2 00:15:59.972 22:57:27 nvme -- scripts/common.sh@353 -- # local d=2 00:15:59.972 22:57:27 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:59.972 22:57:27 nvme -- scripts/common.sh@355 -- # echo 2 00:15:59.972 22:57:27 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:59.972 22:57:27 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:59.972 22:57:27 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:59.972 22:57:27 nvme -- scripts/common.sh@368 -- # return 0 00:15:59.972 22:57:27 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:59.972 22:57:27 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:59.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.972 --rc genhtml_branch_coverage=1 00:15:59.972 --rc genhtml_function_coverage=1 00:15:59.972 --rc genhtml_legend=1 00:15:59.972 --rc geninfo_all_blocks=1 00:15:59.972 --rc geninfo_unexecuted_blocks=1 00:15:59.972 00:15:59.972 ' 00:15:59.972 22:57:27 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:59.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.972 --rc genhtml_branch_coverage=1 00:15:59.972 --rc genhtml_function_coverage=1 00:15:59.972 --rc genhtml_legend=1 00:15:59.972 --rc geninfo_all_blocks=1 00:15:59.972 --rc geninfo_unexecuted_blocks=1 00:15:59.972 00:15:59.972 ' 00:15:59.972 22:57:27 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:59.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.972 --rc genhtml_branch_coverage=1 00:15:59.972 --rc genhtml_function_coverage=1 00:15:59.972 --rc genhtml_legend=1 00:15:59.972 --rc geninfo_all_blocks=1 00:15:59.972 --rc geninfo_unexecuted_blocks=1 00:15:59.972 00:15:59.972 ' 00:15:59.972 22:57:27 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:59.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.972 --rc genhtml_branch_coverage=1 00:15:59.972 --rc genhtml_function_coverage=1 00:15:59.972 --rc genhtml_legend=1 00:15:59.972 --rc geninfo_all_blocks=1 00:15:59.972 --rc geninfo_unexecuted_blocks=1 00:15:59.972 00:15:59.972 ' 00:15:59.972 22:57:27 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:00.909 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:01.478 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:01.478 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:01.478 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:01.737 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:01.737 22:57:28 nvme -- nvme/nvme.sh@79 -- # uname 00:16:01.737 22:57:28 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:16:01.737 22:57:28 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:16:01.737 22:57:28 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:16:01.737 22:57:28 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:16:01.737 22:57:28 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:16:01.737 22:57:28 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:16:01.737 22:57:28 nvme -- common/autotest_common.sh@1075 -- # stubpid=64424 00:16:01.737 Waiting for stub to ready for secondary processes... 00:16:01.737 22:57:28 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:16:01.737 22:57:28 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:16:01.737 22:57:28 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:01.737 22:57:28 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64424 ]] 00:16:01.737 22:57:28 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:01.737 [2024-12-09 22:57:29.025808] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:16:01.737 [2024-12-09 22:57:29.025966] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:16:02.673 22:57:29 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:02.673 22:57:29 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64424 ]] 00:16:02.673 22:57:29 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:16:03.241 [2024-12-09 22:57:30.560138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:03.501 [2024-12-09 22:57:30.680536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.501 [2024-12-09 22:57:30.680634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.501 [2024-12-09 22:57:30.680658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.501 [2024-12-09 22:57:30.699010] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:16:03.501 [2024-12-09 22:57:30.699083] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:16:03.501 [2024-12-09 22:57:30.720327] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:16:03.501 [2024-12-09 22:57:30.720594] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:16:03.501 [2024-12-09 22:57:30.725221] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:16:03.501 [2024-12-09 22:57:30.725565] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:16:03.501 [2024-12-09 22:57:30.725727] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:16:03.501 [2024-12-09 22:57:30.731250] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:16:03.501 [2024-12-09 22:57:30.731662] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:16:03.501 [2024-12-09 22:57:30.731886] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:16:03.501 [2024-12-09 22:57:30.737333] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:16:03.501 [2024-12-09 22:57:30.737680] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:16:03.501 [2024-12-09 22:57:30.737832] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:16:03.501 [2024-12-09 22:57:30.737952] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:16:03.501 [2024-12-09 22:57:30.738063] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:16:03.760 22:57:30 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:16:03.760 done. 00:16:03.760 22:57:30 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:16:03.760 22:57:30 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:16:03.760 22:57:30 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:16:03.760 22:57:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.760 22:57:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:03.760 ************************************ 00:16:03.760 START TEST nvme_reset 00:16:03.760 ************************************ 00:16:03.760 22:57:30 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:16:04.020 Initializing NVMe Controllers 00:16:04.020 Skipping QEMU NVMe SSD at 0000:00:10.0 00:16:04.020 Skipping QEMU NVMe SSD at 0000:00:11.0 00:16:04.020 Skipping QEMU NVMe SSD at 0000:00:13.0 00:16:04.020 Skipping QEMU NVMe SSD at 0000:00:12.0 00:16:04.020 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:16:04.020 00:16:04.020 real 0m0.308s 00:16:04.020 user 0m0.117s 00:16:04.020 sys 0m0.146s 00:16:04.020 22:57:31 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.020 22:57:31 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:16:04.020 ************************************ 00:16:04.020 END TEST nvme_reset 00:16:04.020 ************************************ 00:16:04.281 22:57:31 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:16:04.281 22:57:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:04.281 22:57:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.281 22:57:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.281 ************************************ 00:16:04.281 START TEST nvme_identify 00:16:04.281 ************************************ 00:16:04.281 22:57:31 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:16:04.281 22:57:31 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:16:04.281 22:57:31 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:16:04.281 22:57:31 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:16:04.281 22:57:31 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:16:04.281 22:57:31 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:04.281 22:57:31 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:16:04.281 22:57:31 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:04.281 22:57:31 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:04.281 22:57:31 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:04.281 22:57:31 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:16:04.281 22:57:31 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:04.281 22:57:31 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:16:04.544 [2024-12-09 22:57:31.753014] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64457 terminated unexpected 00:16:04.544 ===================================================== 00:16:04.544 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:04.544 ===================================================== 00:16:04.544 Controller Capabilities/Features 00:16:04.544 ================================ 00:16:04.544 Vendor ID: 1b36 00:16:04.544 Subsystem Vendor ID: 1af4 00:16:04.544 Serial Number: 12340 00:16:04.544 Model Number: QEMU NVMe Ctrl 00:16:04.544 Firmware Version: 8.0.0 00:16:04.544 Recommended Arb Burst: 6 00:16:04.544 IEEE OUI Identifier: 00 54 52 00:16:04.544 Multi-path I/O 00:16:04.544 May have multiple subsystem ports: No 00:16:04.544 May have multiple controllers: No 00:16:04.544 Associated with SR-IOV VF: No 00:16:04.544 Max Data Transfer Size: 524288 00:16:04.544 Max Number of Namespaces: 256 00:16:04.544 Max Number of I/O Queues: 64 00:16:04.544 NVMe Specification Version (VS): 1.4 00:16:04.544 NVMe Specification Version (Identify): 1.4 00:16:04.544 Maximum Queue Entries: 2048 00:16:04.544 Contiguous Queues Required: Yes 00:16:04.544 Arbitration Mechanisms Supported 00:16:04.544 Weighted Round Robin: Not Supported 00:16:04.544 Vendor Specific: Not Supported 00:16:04.544 Reset Timeout: 7500 ms 00:16:04.544 Doorbell Stride: 4 bytes 00:16:04.544 NVM Subsystem Reset: Not Supported 00:16:04.544 Command Sets Supported 00:16:04.544 NVM Command Set: Supported 00:16:04.544 Boot Partition: Not Supported 00:16:04.544 Memory Page Size Minimum: 4096 bytes 00:16:04.544 Memory Page Size Maximum: 65536 bytes 00:16:04.544 Persistent Memory Region: Not Supported 00:16:04.544 Optional Asynchronous Events Supported 00:16:04.544 Namespace Attribute Notices: Supported 00:16:04.544 Firmware Activation Notices: Not Supported 00:16:04.544 ANA Change Notices: Not Supported 00:16:04.544 PLE Aggregate Log Change Notices: Not Supported 00:16:04.544 LBA Status Info Alert Notices: Not Supported 00:16:04.544 EGE Aggregate Log Change Notices: Not Supported 00:16:04.544 Normal NVM Subsystem Shutdown event: Not Supported 00:16:04.544 Zone Descriptor Change Notices: Not Supported 00:16:04.544 Discovery Log Change Notices: Not Supported 00:16:04.544 Controller Attributes 00:16:04.544 128-bit Host Identifier: Not Supported 00:16:04.544 Non-Operational Permissive Mode: Not Supported 00:16:04.544 NVM Sets: Not Supported 00:16:04.544 Read Recovery Levels: Not Supported 00:16:04.544 Endurance Groups: Not Supported 00:16:04.544 Predictable Latency Mode: Not Supported 00:16:04.544 Traffic Based Keep ALive: Not Supported 00:16:04.544 Namespace Granularity: Not Supported 00:16:04.544 SQ Associations: Not Supported 00:16:04.544 UUID List: Not Supported 00:16:04.544 Multi-Domain Subsystem: Not Supported 00:16:04.544 Fixed Capacity Management: Not Supported 00:16:04.544 Variable Capacity Management: Not Supported 00:16:04.544 Delete Endurance Group: Not Supported 00:16:04.544 Delete NVM Set: Not Supported 00:16:04.544 Extended LBA Formats Supported: Supported 00:16:04.544 Flexible Data Placement Supported: Not Supported 00:16:04.544 00:16:04.544 Controller Memory Buffer Support 00:16:04.544 ================================ 00:16:04.544 Supported: No 
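(The identify dump interleaved here is produced by the nvme_identify helper traced just above: it gathers the controllers' PCI addresses by piping scripts/gen_nvme.sh through jq, confirms the list is non-empty — the `(( 4 == 0 ))` guard in the trace — and then launches the spdk_nvme_identify binary, which walks every attached controller in one pass, as the output shows. A minimal standalone sketch of that flow, using only the paths, jq filter, and flag visible in the trace; the error message is illustrative, not part of the harness:

    #!/usr/bin/env bash
    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh emits a JSON bdev config; each .config[].params.traddr is a PCI BDF.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # Mirror the harness's non-empty check before running the identify tool.
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    # -i 0 matches the invocation in the log; one run dumps all attached controllers.
    "$rootdir/build/bin/spdk_nvme_identify" -i 0
)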
00:16:04.544 00:16:04.544 Persistent Memory Region Support 00:16:04.544 ================================ 00:16:04.544 Supported: No 00:16:04.544 00:16:04.544 Admin Command Set Attributes 00:16:04.544 ============================ 00:16:04.544 Security Send/Receive: Not Supported 00:16:04.544 Format NVM: Supported 00:16:04.544 Firmware Activate/Download: Not Supported 00:16:04.544 Namespace Management: Supported 00:16:04.544 Device Self-Test: Not Supported 00:16:04.544 Directives: Supported 00:16:04.544 NVMe-MI: Not Supported 00:16:04.544 Virtualization Management: Not Supported 00:16:04.544 Doorbell Buffer Config: Supported 00:16:04.544 Get LBA Status Capability: Not Supported 00:16:04.544 Command & Feature Lockdown Capability: Not Supported 00:16:04.544 Abort Command Limit: 4 00:16:04.544 Async Event Request Limit: 4 00:16:04.544 Number of Firmware Slots: N/A 00:16:04.544 Firmware Slot 1 Read-Only: N/A 00:16:04.544 Firmware Activation Without Reset: N/A 00:16:04.544 Multiple Update Detection Support: N/A 00:16:04.544 Firmware Update Granularity: No Information Provided 00:16:04.544 Per-Namespace SMART Log: Yes 00:16:04.544 Asymmetric Namespace Access Log Page: Not Supported 00:16:04.544 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:16:04.544 Command Effects Log Page: Supported 00:16:04.544 Get Log Page Extended Data: Supported 00:16:04.544 Telemetry Log Pages: Not Supported 00:16:04.544 Persistent Event Log Pages: Not Supported 00:16:04.544 Supported Log Pages Log Page: May Support 00:16:04.544 Commands Supported & Effects Log Page: Not Supported 00:16:04.544 Feature Identifiers & Effects Log Page:May Support 00:16:04.544 NVMe-MI Commands & Effects Log Page: May Support 00:16:04.544 Data Area 4 for Telemetry Log: Not Supported 00:16:04.544 Error Log Page Entries Supported: 1 00:16:04.544 Keep Alive: Not Supported 00:16:04.544 00:16:04.544 NVM Command Set Attributes 00:16:04.544 ========================== 00:16:04.544 Submission Queue Entry Size 00:16:04.544 Max: 64 00:16:04.544 Min: 64 00:16:04.544 Completion Queue Entry Size 00:16:04.544 Max: 16 00:16:04.544 Min: 16 00:16:04.544 Number of Namespaces: 256 00:16:04.544 Compare Command: Supported 00:16:04.544 Write Uncorrectable Command: Not Supported 00:16:04.544 Dataset Management Command: Supported 00:16:04.544 Write Zeroes Command: Supported 00:16:04.544 Set Features Save Field: Supported 00:16:04.544 Reservations: Not Supported 00:16:04.544 Timestamp: Supported 00:16:04.544 Copy: Supported 00:16:04.544 Volatile Write Cache: Present 00:16:04.544 Atomic Write Unit (Normal): 1 00:16:04.544 Atomic Write Unit (PFail): 1 00:16:04.544 Atomic Compare & Write Unit: 1 00:16:04.544 Fused Compare & Write: Not Supported 00:16:04.544 Scatter-Gather List 00:16:04.544 SGL Command Set: Supported 00:16:04.544 SGL Keyed: Not Supported 00:16:04.544 SGL Bit Bucket Descriptor: Not Supported 00:16:04.544 SGL Metadata Pointer: Not Supported 00:16:04.544 Oversized SGL: Not Supported 00:16:04.544 SGL Metadata Address: Not Supported 00:16:04.544 SGL Offset: Not Supported 00:16:04.544 Transport SGL Data Block: Not Supported 00:16:04.544 Replay Protected Memory Block: Not Supported 00:16:04.544 00:16:04.544 Firmware Slot Information 00:16:04.544 ========================= 00:16:04.544 Active slot: 1 00:16:04.544 Slot 1 Firmware Revision: 1.0 00:16:04.544 00:16:04.544 00:16:04.544 Commands Supported and Effects 00:16:04.544 ============================== 00:16:04.544 Admin Commands 00:16:04.544 -------------- 00:16:04.544 Delete I/O Submission Queue (00h): Supported 
00:16:04.544 Create I/O Submission Queue (01h): Supported 00:16:04.544 Get Log Page (02h): Supported 00:16:04.544 Delete I/O Completion Queue (04h): Supported 00:16:04.544 Create I/O Completion Queue (05h): Supported 00:16:04.544 Identify (06h): Supported 00:16:04.544 Abort (08h): Supported 00:16:04.544 Set Features (09h): Supported 00:16:04.544 Get Features (0Ah): Supported 00:16:04.544 Asynchronous Event Request (0Ch): Supported 00:16:04.544 Namespace Attachment (15h): Supported NS-Inventory-Change 00:16:04.544 Directive Send (19h): Supported 00:16:04.544 Directive Receive (1Ah): Supported 00:16:04.544 Virtualization Management (1Ch): Supported 00:16:04.544 Doorbell Buffer Config (7Ch): Supported 00:16:04.545 Format NVM (80h): Supported LBA-Change 00:16:04.545 I/O Commands 00:16:04.545 ------------ 00:16:04.545 Flush (00h): Supported LBA-Change 00:16:04.545 Write (01h): Supported LBA-Change 00:16:04.545 Read (02h): Supported 00:16:04.545 Compare (05h): Supported 00:16:04.545 Write Zeroes (08h): Supported LBA-Change 00:16:04.545 Dataset Management (09h): Supported LBA-Change 00:16:04.545 Unknown (0Ch): Supported 00:16:04.545 Unknown (12h): Supported 00:16:04.545 Copy (19h): Supported LBA-Change 00:16:04.545 Unknown (1Dh): Supported LBA-Change 00:16:04.545 00:16:04.545 Error Log 00:16:04.545 ========= 00:16:04.545 00:16:04.545 Arbitration 00:16:04.545 =========== 00:16:04.545 Arbitration Burst: no limit 00:16:04.545 00:16:04.545 Power Management 00:16:04.545 ================ 00:16:04.545 Number of Power States: 1 00:16:04.545 Current Power State: Power State #0 00:16:04.545 Power State #0: 00:16:04.545 Max Power: 25.00 W 00:16:04.545 Non-Operational State: Operational 00:16:04.545 Entry Latency: 16 microseconds 00:16:04.545 Exit Latency: 4 microseconds 00:16:04.545 Relative Read Throughput: 0 00:16:04.545 Relative Read Latency: 0 00:16:04.545 Relative Write Throughput: 0 00:16:04.545 Relative Write Latency: 0 00:16:04.545 Idle Power[2024-12-09 22:57:31.754785] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64457 terminated unexpected 00:16:04.545 : Not Reported 00:16:04.545 Active Power: Not Reported 00:16:04.545 Non-Operational Permissive Mode: Not Supported 00:16:04.545 00:16:04.545 Health Information 00:16:04.545 ================== 00:16:04.545 Critical Warnings: 00:16:04.545 Available Spare Space: OK 00:16:04.545 Temperature: OK 00:16:04.545 Device Reliability: OK 00:16:04.545 Read Only: No 00:16:04.545 Volatile Memory Backup: OK 00:16:04.545 Current Temperature: 323 Kelvin (50 Celsius) 00:16:04.545 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:04.545 Available Spare: 0% 00:16:04.545 Available Spare Threshold: 0% 00:16:04.545 Life Percentage Used: 0% 00:16:04.545 Data Units Read: 766 00:16:04.545 Data Units Written: 694 00:16:04.545 Host Read Commands: 36720 00:16:04.545 Host Write Commands: 36506 00:16:04.545 Controller Busy Time: 0 minutes 00:16:04.545 Power Cycles: 0 00:16:04.545 Power On Hours: 0 hours 00:16:04.545 Unsafe Shutdowns: 0 00:16:04.545 Unrecoverable Media Errors: 0 00:16:04.545 Lifetime Error Log Entries: 0 00:16:04.545 Warning Temperature Time: 0 minutes 00:16:04.545 Critical Temperature Time: 0 minutes 00:16:04.545 00:16:04.545 Number of Queues 00:16:04.545 ================ 00:16:04.545 Number of I/O Submission Queues: 64 00:16:04.545 Number of I/O Completion Queues: 64 00:16:04.545 00:16:04.545 ZNS Specific Controller Data 00:16:04.545 ============================ 00:16:04.545 Zone Append Size Limit: 0 00:16:04.545 
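(A note on the Health Information block above: temperatures are reported in integer Kelvin with the Celsius value in parentheses, and the 323 Kelvin / 50 Celsius pairing implies the tool subtracts the integer offset 273 rather than 273.15. A one-line sanity check of that arithmetic, illustrative only:

    awk 'BEGIN { k = 323; printf "%d Kelvin (%d Celsius)\n", k, k - 273 }'
)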
00:16:04.545 00:16:04.545 Active Namespaces 00:16:04.545 ================= 00:16:04.545 Namespace ID:1 00:16:04.545 Error Recovery Timeout: Unlimited 00:16:04.545 Command Set Identifier: NVM (00h) 00:16:04.545 Deallocate: Supported 00:16:04.545 Deallocated/Unwritten Error: Supported 00:16:04.545 Deallocated Read Value: All 0x00 00:16:04.545 Deallocate in Write Zeroes: Not Supported 00:16:04.545 Deallocated Guard Field: 0xFFFF 00:16:04.545 Flush: Supported 00:16:04.545 Reservation: Not Supported 00:16:04.545 Metadata Transferred as: Separate Metadata Buffer 00:16:04.545 Namespace Sharing Capabilities: Private 00:16:04.545 Size (in LBAs): 1548666 (5GiB) 00:16:04.545 Capacity (in LBAs): 1548666 (5GiB) 00:16:04.545 Utilization (in LBAs): 1548666 (5GiB) 00:16:04.545 Thin Provisioning: Not Supported 00:16:04.545 Per-NS Atomic Units: No 00:16:04.545 Maximum Single Source Range Length: 128 00:16:04.545 Maximum Copy Length: 128 00:16:04.545 Maximum Source Range Count: 128 00:16:04.545 NGUID/EUI64 Never Reused: No 00:16:04.545 Namespace Write Protected: No 00:16:04.545 Number of LBA Formats: 8 00:16:04.545 Current LBA Format: LBA Format #07 00:16:04.545 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:04.545 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:04.545 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:04.545 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:04.545 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:04.545 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:04.545 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:04.545 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:04.545 00:16:04.545 NVM Specific Namespace Data 00:16:04.545 =========================== 00:16:04.545 Logical Block Storage Tag Mask: 0 00:16:04.545 Protection Information Capabilities: 00:16:04.545 16b Guard Protection Information Storage Tag Support: No 00:16:04.545 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:04.545 Storage Tag Check Read Support: No 00:16:04.545 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.545 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.545 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.545 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.545 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.545 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.545 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.545 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.545 ===================================================== 00:16:04.545 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:04.545 ===================================================== 00:16:04.545 Controller Capabilities/Features 00:16:04.545 ================================ 00:16:04.545 Vendor ID: 1b36 00:16:04.545 Subsystem Vendor ID: 1af4 00:16:04.545 Serial Number: 12341 00:16:04.545 Model Number: QEMU NVMe Ctrl 00:16:04.545 Firmware Version: 8.0.0 00:16:04.545 Recommended Arb Burst: 6 00:16:04.545 IEEE OUI Identifier: 00 54 52 00:16:04.545 Multi-path I/O 00:16:04.545 May have multiple subsystem ports: No 00:16:04.545 May have multiple controllers: No 
00:16:04.545 Associated with SR-IOV VF: No 00:16:04.545 Max Data Transfer Size: 524288 00:16:04.545 Max Number of Namespaces: 256 00:16:04.545 Max Number of I/O Queues: 64 00:16:04.545 NVMe Specification Version (VS): 1.4 00:16:04.545 NVMe Specification Version (Identify): 1.4 00:16:04.545 Maximum Queue Entries: 2048 00:16:04.545 Contiguous Queues Required: Yes 00:16:04.545 Arbitration Mechanisms Supported 00:16:04.545 Weighted Round Robin: Not Supported 00:16:04.545 Vendor Specific: Not Supported 00:16:04.545 Reset Timeout: 7500 ms 00:16:04.545 Doorbell Stride: 4 bytes 00:16:04.545 NVM Subsystem Reset: Not Supported 00:16:04.545 Command Sets Supported 00:16:04.545 NVM Command Set: Supported 00:16:04.545 Boot Partition: Not Supported 00:16:04.545 Memory Page Size Minimum: 4096 bytes 00:16:04.545 Memory Page Size Maximum: 65536 bytes 00:16:04.545 Persistent Memory Region: Not Supported 00:16:04.545 Optional Asynchronous Events Supported 00:16:04.545 Namespace Attribute Notices: Supported 00:16:04.545 Firmware Activation Notices: Not Supported 00:16:04.545 ANA Change Notices: Not Supported 00:16:04.545 PLE Aggregate Log Change Notices: Not Supported 00:16:04.545 LBA Status Info Alert Notices: Not Supported 00:16:04.545 EGE Aggregate Log Change Notices: Not Supported 00:16:04.545 Normal NVM Subsystem Shutdown event: Not Supported 00:16:04.545 Zone Descriptor Change Notices: Not Supported 00:16:04.545 Discovery Log Change Notices: Not Supported 00:16:04.545 Controller Attributes 00:16:04.545 128-bit Host Identifier: Not Supported 00:16:04.545 Non-Operational Permissive Mode: Not Supported 00:16:04.545 NVM Sets: Not Supported 00:16:04.545 Read Recovery Levels: Not Supported 00:16:04.545 Endurance Groups: Not Supported 00:16:04.545 Predictable Latency Mode: Not Supported 00:16:04.545 Traffic Based Keep ALive: Not Supported 00:16:04.545 Namespace Granularity: Not Supported 00:16:04.545 SQ Associations: Not Supported 00:16:04.545 UUID List: Not Supported 00:16:04.545 Multi-Domain Subsystem: Not Supported 00:16:04.545 Fixed Capacity Management: Not Supported 00:16:04.545 Variable Capacity Management: Not Supported 00:16:04.545 Delete Endurance Group: Not Supported 00:16:04.545 Delete NVM Set: Not Supported 00:16:04.545 Extended LBA Formats Supported: Supported 00:16:04.545 Flexible Data Placement Supported: Not Supported 00:16:04.545 00:16:04.545 Controller Memory Buffer Support 00:16:04.546 ================================ 00:16:04.546 Supported: No 00:16:04.546 00:16:04.546 Persistent Memory Region Support 00:16:04.546 ================================ 00:16:04.546 Supported: No 00:16:04.546 00:16:04.546 Admin Command Set Attributes 00:16:04.546 ============================ 00:16:04.546 Security Send/Receive: Not Supported 00:16:04.546 Format NVM: Supported 00:16:04.546 Firmware Activate/Download: Not Supported 00:16:04.546 Namespace Management: Supported 00:16:04.546 Device Self-Test: Not Supported 00:16:04.546 Directives: Supported 00:16:04.546 NVMe-MI: Not Supported 00:16:04.546 Virtualization Management: Not Supported 00:16:04.546 Doorbell Buffer Config: Supported 00:16:04.546 Get LBA Status Capability: Not Supported 00:16:04.546 Command & Feature Lockdown Capability: Not Supported 00:16:04.546 Abort Command Limit: 4 00:16:04.546 Async Event Request Limit: 4 00:16:04.546 Number of Firmware Slots: N/A 00:16:04.546 Firmware Slot 1 Read-Only: N/A 00:16:04.546 Firmware Activation Without Reset: N/A 00:16:04.546 Multiple Update Detection Support: N/A 00:16:04.546 Firmware Update Granularity: No 
Information Provided 00:16:04.546 Per-Namespace SMART Log: Yes 00:16:04.546 Asymmetric Namespace Access Log Page: Not Supported 00:16:04.546 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:16:04.546 Command Effects Log Page: Supported 00:16:04.546 Get Log Page Extended Data: Supported 00:16:04.546 Telemetry Log Pages: Not Supported 00:16:04.546 Persistent Event Log Pages: Not Supported 00:16:04.546 Supported Log Pages Log Page: May Support 00:16:04.546 Commands Supported & Effects Log Page: Not Supported 00:16:04.546 Feature Identifiers & Effects Log Page:May Support 00:16:04.546 NVMe-MI Commands & Effects Log Page: May Support 00:16:04.546 Data Area 4 for Telemetry Log: Not Supported 00:16:04.546 Error Log Page Entries Supported: 1 00:16:04.546 Keep Alive: Not Supported 00:16:04.546 00:16:04.546 NVM Command Set Attributes 00:16:04.546 ========================== 00:16:04.546 Submission Queue Entry Size 00:16:04.546 Max: 64 00:16:04.546 Min: 64 00:16:04.546 Completion Queue Entry Size 00:16:04.546 Max: 16 00:16:04.546 Min: 16 00:16:04.546 Number of Namespaces: 256 00:16:04.546 Compare Command: Supported 00:16:04.546 Write Uncorrectable Command: Not Supported 00:16:04.546 Dataset Management Command: Supported 00:16:04.546 Write Zeroes Command: Supported 00:16:04.546 Set Features Save Field: Supported 00:16:04.546 Reservations: Not Supported 00:16:04.546 Timestamp: Supported 00:16:04.546 Copy: Supported 00:16:04.546 Volatile Write Cache: Present 00:16:04.546 Atomic Write Unit (Normal): 1 00:16:04.546 Atomic Write Unit (PFail): 1 00:16:04.546 Atomic Compare & Write Unit: 1 00:16:04.546 Fused Compare & Write: Not Supported 00:16:04.546 Scatter-Gather List 00:16:04.546 SGL Command Set: Supported 00:16:04.546 SGL Keyed: Not Supported 00:16:04.546 SGL Bit Bucket Descriptor: Not Supported 00:16:04.546 SGL Metadata Pointer: Not Supported 00:16:04.546 Oversized SGL: Not Supported 00:16:04.546 SGL Metadata Address: Not Supported 00:16:04.546 SGL Offset: Not Supported 00:16:04.546 Transport SGL Data Block: Not Supported 00:16:04.546 Replay Protected Memory Block: Not Supported 00:16:04.546 00:16:04.546 Firmware Slot Information 00:16:04.546 ========================= 00:16:04.546 Active slot: 1 00:16:04.546 Slot 1 Firmware Revision: 1.0 00:16:04.546 00:16:04.546 00:16:04.546 Commands Supported and Effects 00:16:04.546 ============================== 00:16:04.546 Admin Commands 00:16:04.546 -------------- 00:16:04.546 Delete I/O Submission Queue (00h): Supported 00:16:04.546 Create I/O Submission Queue (01h): Supported 00:16:04.546 Get Log Page (02h): Supported 00:16:04.546 Delete I/O Completion Queue (04h): Supported 00:16:04.546 Create I/O Completion Queue (05h): Supported 00:16:04.546 Identify (06h): Supported 00:16:04.546 Abort (08h): Supported 00:16:04.546 Set Features (09h): Supported 00:16:04.546 Get Features (0Ah): Supported 00:16:04.546 Asynchronous Event Request (0Ch): Supported 00:16:04.546 Namespace Attachment (15h): Supported NS-Inventory-Change 00:16:04.546 Directive Send (19h): Supported 00:16:04.546 Directive Receive (1Ah): Supported 00:16:04.546 Virtualization Management (1Ch): Supported 00:16:04.546 Doorbell Buffer Config (7Ch): Supported 00:16:04.546 Format NVM (80h): Supported LBA-Change 00:16:04.546 I/O Commands 00:16:04.546 ------------ 00:16:04.546 Flush (00h): Supported LBA-Change 00:16:04.546 Write (01h): Supported LBA-Change 00:16:04.546 Read (02h): Supported 00:16:04.546 Compare (05h): Supported 00:16:04.546 Write Zeroes (08h): Supported LBA-Change 00:16:04.546 Dataset Management 
(09h): Supported LBA-Change 00:16:04.546 Unknown (0Ch): Supported 00:16:04.546 Unknown (12h): Supported 00:16:04.546 Copy (19h): Supported LBA-Change 00:16:04.546 Unknown (1Dh): Supported LBA-Change 00:16:04.546 00:16:04.546 Error Log 00:16:04.546 ========= 00:16:04.546 00:16:04.546 Arbitration 00:16:04.546 =========== 00:16:04.546 Arbitration Burst: no limit 00:16:04.546 00:16:04.546 Power Management 00:16:04.546 ================ 00:16:04.546 Number of Power States: 1 00:16:04.546 Current Power State: Power State #0 00:16:04.546 Power State #0: 00:16:04.546 Max Power: 25.00 W 00:16:04.546 Non-Operational State: Operational 00:16:04.546 Entry Latency: 16 microseconds 00:16:04.546 Exit Latency: 4 microseconds 00:16:04.546 Relative Read Throughput: 0 00:16:04.546 Relative Read Latency: 0 00:16:04.546 Relative Write Throughput: 0 00:16:04.546 Relative Write Latency: 0 00:16:04.546 Idle Power: Not Reported 00:16:04.546 Active Power: Not Reported 00:16:04.546 Non-Operational Permissive Mode: Not Supported 00:16:04.546 00:16:04.546 Health Information 00:16:04.546 ================== 00:16:04.546 Critical Warnings: 00:16:04.546 Available Spare Space: OK 00:16:04.546 Temperature: [2024-12-09 22:57:31.755684] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64457 terminated unexpected 00:16:04.546 OK 00:16:04.546 Device Reliability: OK 00:16:04.546 Read Only: No 00:16:04.546 Volatile Memory Backup: OK 00:16:04.546 Current Temperature: 323 Kelvin (50 Celsius) 00:16:04.546 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:04.546 Available Spare: 0% 00:16:04.546 Available Spare Threshold: 0% 00:16:04.546 Life Percentage Used: 0% 00:16:04.546 Data Units Read: 1189 00:16:04.546 Data Units Written: 1055 00:16:04.546 Host Read Commands: 55182 00:16:04.546 Host Write Commands: 53975 00:16:04.546 Controller Busy Time: 0 minutes 00:16:04.546 Power Cycles: 0 00:16:04.546 Power On Hours: 0 hours 00:16:04.546 Unsafe Shutdowns: 0 00:16:04.546 Unrecoverable Media Errors: 0 00:16:04.546 Lifetime Error Log Entries: 0 00:16:04.546 Warning Temperature Time: 0 minutes 00:16:04.546 Critical Temperature Time: 0 minutes 00:16:04.546 00:16:04.546 Number of Queues 00:16:04.546 ================ 00:16:04.546 Number of I/O Submission Queues: 64 00:16:04.546 Number of I/O Completion Queues: 64 00:16:04.546 00:16:04.546 ZNS Specific Controller Data 00:16:04.546 ============================ 00:16:04.546 Zone Append Size Limit: 0 00:16:04.546 00:16:04.546 00:16:04.546 Active Namespaces 00:16:04.546 ================= 00:16:04.546 Namespace ID:1 00:16:04.546 Error Recovery Timeout: Unlimited 00:16:04.546 Command Set Identifier: NVM (00h) 00:16:04.546 Deallocate: Supported 00:16:04.546 Deallocated/Unwritten Error: Supported 00:16:04.546 Deallocated Read Value: All 0x00 00:16:04.546 Deallocate in Write Zeroes: Not Supported 00:16:04.546 Deallocated Guard Field: 0xFFFF 00:16:04.546 Flush: Supported 00:16:04.546 Reservation: Not Supported 00:16:04.546 Namespace Sharing Capabilities: Private 00:16:04.546 Size (in LBAs): 1310720 (5GiB) 00:16:04.546 Capacity (in LBAs): 1310720 (5GiB) 00:16:04.546 Utilization (in LBAs): 1310720 (5GiB) 00:16:04.546 Thin Provisioning: Not Supported 00:16:04.546 Per-NS Atomic Units: No 00:16:04.546 Maximum Single Source Range Length: 128 00:16:04.546 Maximum Copy Length: 128 00:16:04.546 Maximum Source Range Count: 128 00:16:04.546 NGUID/EUI64 Never Reused: No 00:16:04.546 Namespace Write Protected: No 00:16:04.546 Number of LBA Formats: 8 00:16:04.546 Current LBA 
Format: LBA Format #04 00:16:04.546 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:04.546 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:04.546 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:04.546 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:04.546 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:04.546 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:04.546 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:04.546 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:04.546 00:16:04.546 NVM Specific Namespace Data 00:16:04.546 =========================== 00:16:04.546 Logical Block Storage Tag Mask: 0 00:16:04.546 Protection Information Capabilities: 00:16:04.546 16b Guard Protection Information Storage Tag Support: No 00:16:04.547 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:04.547 Storage Tag Check Read Support: No 00:16:04.547 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.547 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.547 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.547 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.547 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.547 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.547 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.547 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.547 ===================================================== 00:16:04.547 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:04.547 ===================================================== 00:16:04.547 Controller Capabilities/Features 00:16:04.547 ================================ 00:16:04.547 Vendor ID: 1b36 00:16:04.547 Subsystem Vendor ID: 1af4 00:16:04.547 Serial Number: 12343 00:16:04.547 Model Number: QEMU NVMe Ctrl 00:16:04.547 Firmware Version: 8.0.0 00:16:04.547 Recommended Arb Burst: 6 00:16:04.547 IEEE OUI Identifier: 00 54 52 00:16:04.547 Multi-path I/O 00:16:04.547 May have multiple subsystem ports: No 00:16:04.547 May have multiple controllers: Yes 00:16:04.547 Associated with SR-IOV VF: No 00:16:04.547 Max Data Transfer Size: 524288 00:16:04.547 Max Number of Namespaces: 256 00:16:04.547 Max Number of I/O Queues: 64 00:16:04.547 NVMe Specification Version (VS): 1.4 00:16:04.547 NVMe Specification Version (Identify): 1.4 00:16:04.547 Maximum Queue Entries: 2048 00:16:04.547 Contiguous Queues Required: Yes 00:16:04.547 Arbitration Mechanisms Supported 00:16:04.547 Weighted Round Robin: Not Supported 00:16:04.547 Vendor Specific: Not Supported 00:16:04.547 Reset Timeout: 7500 ms 00:16:04.547 Doorbell Stride: 4 bytes 00:16:04.547 NVM Subsystem Reset: Not Supported 00:16:04.547 Command Sets Supported 00:16:04.547 NVM Command Set: Supported 00:16:04.547 Boot Partition: Not Supported 00:16:04.547 Memory Page Size Minimum: 4096 bytes 00:16:04.547 Memory Page Size Maximum: 65536 bytes 00:16:04.547 Persistent Memory Region: Not Supported 00:16:04.547 Optional Asynchronous Events Supported 00:16:04.547 Namespace Attribute Notices: Supported 00:16:04.547 Firmware Activation Notices: Not Supported 00:16:04.547 ANA Change Notices: Not Supported 00:16:04.547 PLE Aggregate 
Log Change Notices: Not Supported 00:16:04.547 LBA Status Info Alert Notices: Not Supported 00:16:04.547 EGE Aggregate Log Change Notices: Not Supported 00:16:04.547 Normal NVM Subsystem Shutdown event: Not Supported 00:16:04.547 Zone Descriptor Change Notices: Not Supported 00:16:04.547 Discovery Log Change Notices: Not Supported 00:16:04.547 Controller Attributes 00:16:04.547 128-bit Host Identifier: Not Supported 00:16:04.547 Non-Operational Permissive Mode: Not Supported 00:16:04.547 NVM Sets: Not Supported 00:16:04.547 Read Recovery Levels: Not Supported 00:16:04.547 Endurance Groups: Supported 00:16:04.547 Predictable Latency Mode: Not Supported 00:16:04.547 Traffic Based Keep ALive: Not Supported 00:16:04.547 Namespace Granularity: Not Supported 00:16:04.547 SQ Associations: Not Supported 00:16:04.547 UUID List: Not Supported 00:16:04.547 Multi-Domain Subsystem: Not Supported 00:16:04.547 Fixed Capacity Management: Not Supported 00:16:04.547 Variable Capacity Management: Not Supported 00:16:04.547 Delete Endurance Group: Not Supported 00:16:04.547 Delete NVM Set: Not Supported 00:16:04.547 Extended LBA Formats Supported: Supported 00:16:04.547 Flexible Data Placement Supported: Supported 00:16:04.547 00:16:04.547 Controller Memory Buffer Support 00:16:04.547 ================================ 00:16:04.547 Supported: No 00:16:04.547 00:16:04.547 Persistent Memory Region Support 00:16:04.547 ================================ 00:16:04.547 Supported: No 00:16:04.547 00:16:04.547 Admin Command Set Attributes 00:16:04.547 ============================ 00:16:04.547 Security Send/Receive: Not Supported 00:16:04.547 Format NVM: Supported 00:16:04.547 Firmware Activate/Download: Not Supported 00:16:04.547 Namespace Management: Supported 00:16:04.547 Device Self-Test: Not Supported 00:16:04.547 Directives: Supported 00:16:04.547 NVMe-MI: Not Supported 00:16:04.547 Virtualization Management: Not Supported 00:16:04.547 Doorbell Buffer Config: Supported 00:16:04.547 Get LBA Status Capability: Not Supported 00:16:04.547 Command & Feature Lockdown Capability: Not Supported 00:16:04.547 Abort Command Limit: 4 00:16:04.547 Async Event Request Limit: 4 00:16:04.547 Number of Firmware Slots: N/A 00:16:04.547 Firmware Slot 1 Read-Only: N/A 00:16:04.547 Firmware Activation Without Reset: N/A 00:16:04.547 Multiple Update Detection Support: N/A 00:16:04.547 Firmware Update Granularity: No Information Provided 00:16:04.547 Per-Namespace SMART Log: Yes 00:16:04.547 Asymmetric Namespace Access Log Page: Not Supported 00:16:04.547 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:16:04.547 Command Effects Log Page: Supported 00:16:04.547 Get Log Page Extended Data: Supported 00:16:04.547 Telemetry Log Pages: Not Supported 00:16:04.547 Persistent Event Log Pages: Not Supported 00:16:04.547 Supported Log Pages Log Page: May Support 00:16:04.547 Commands Supported & Effects Log Page: Not Supported 00:16:04.547 Feature Identifiers & Effects Log Page:May Support 00:16:04.547 NVMe-MI Commands & Effects Log Page: May Support 00:16:04.547 Data Area 4 for Telemetry Log: Not Supported 00:16:04.547 Error Log Page Entries Supported: 1 00:16:04.547 Keep Alive: Not Supported 00:16:04.547 00:16:04.547 NVM Command Set Attributes 00:16:04.547 ========================== 00:16:04.547 Submission Queue Entry Size 00:16:04.547 Max: 64 00:16:04.547 Min: 64 00:16:04.547 Completion Queue Entry Size 00:16:04.547 Max: 16 00:16:04.547 Min: 16 00:16:04.547 Number of Namespaces: 256 00:16:04.547 Compare Command: Supported 00:16:04.547 Write 
Uncorrectable Command: Not Supported 00:16:04.547 Dataset Management Command: Supported 00:16:04.547 Write Zeroes Command: Supported 00:16:04.547 Set Features Save Field: Supported 00:16:04.547 Reservations: Not Supported 00:16:04.547 Timestamp: Supported 00:16:04.547 Copy: Supported 00:16:04.547 Volatile Write Cache: Present 00:16:04.547 Atomic Write Unit (Normal): 1 00:16:04.547 Atomic Write Unit (PFail): 1 00:16:04.547 Atomic Compare & Write Unit: 1 00:16:04.547 Fused Compare & Write: Not Supported 00:16:04.547 Scatter-Gather List 00:16:04.547 SGL Command Set: Supported 00:16:04.547 SGL Keyed: Not Supported 00:16:04.547 SGL Bit Bucket Descriptor: Not Supported 00:16:04.547 SGL Metadata Pointer: Not Supported 00:16:04.547 Oversized SGL: Not Supported 00:16:04.547 SGL Metadata Address: Not Supported 00:16:04.547 SGL Offset: Not Supported 00:16:04.547 Transport SGL Data Block: Not Supported 00:16:04.547 Replay Protected Memory Block: Not Supported 00:16:04.547 00:16:04.547 Firmware Slot Information 00:16:04.547 ========================= 00:16:04.547 Active slot: 1 00:16:04.547 Slot 1 Firmware Revision: 1.0 00:16:04.547 00:16:04.547 00:16:04.547 Commands Supported and Effects 00:16:04.547 ============================== 00:16:04.547 Admin Commands 00:16:04.547 -------------- 00:16:04.547 Delete I/O Submission Queue (00h): Supported 00:16:04.547 Create I/O Submission Queue (01h): Supported 00:16:04.547 Get Log Page (02h): Supported 00:16:04.547 Delete I/O Completion Queue (04h): Supported 00:16:04.547 Create I/O Completion Queue (05h): Supported 00:16:04.547 Identify (06h): Supported 00:16:04.547 Abort (08h): Supported 00:16:04.547 Set Features (09h): Supported 00:16:04.547 Get Features (0Ah): Supported 00:16:04.547 Asynchronous Event Request (0Ch): Supported 00:16:04.547 Namespace Attachment (15h): Supported NS-Inventory-Change 00:16:04.547 Directive Send (19h): Supported 00:16:04.547 Directive Receive (1Ah): Supported 00:16:04.547 Virtualization Management (1Ch): Supported 00:16:04.547 Doorbell Buffer Config (7Ch): Supported 00:16:04.547 Format NVM (80h): Supported LBA-Change 00:16:04.547 I/O Commands 00:16:04.547 ------------ 00:16:04.547 Flush (00h): Supported LBA-Change 00:16:04.547 Write (01h): Supported LBA-Change 00:16:04.547 Read (02h): Supported 00:16:04.547 Compare (05h): Supported 00:16:04.547 Write Zeroes (08h): Supported LBA-Change 00:16:04.547 Dataset Management (09h): Supported LBA-Change 00:16:04.547 Unknown (0Ch): Supported 00:16:04.547 Unknown (12h): Supported 00:16:04.547 Copy (19h): Supported LBA-Change 00:16:04.547 Unknown (1Dh): Supported LBA-Change 00:16:04.547 00:16:04.547 Error Log 00:16:04.547 ========= 00:16:04.547 00:16:04.547 Arbitration 00:16:04.547 =========== 00:16:04.547 Arbitration Burst: no limit 00:16:04.547 00:16:04.547 Power Management 00:16:04.547 ================ 00:16:04.547 Number of Power States: 1 00:16:04.548 Current Power State: Power State #0 00:16:04.548 Power State #0: 00:16:04.548 Max Power: 25.00 W 00:16:04.548 Non-Operational State: Operational 00:16:04.548 Entry Latency: 16 microseconds 00:16:04.548 Exit Latency: 4 microseconds 00:16:04.548 Relative Read Throughput: 0 00:16:04.548 Relative Read Latency: 0 00:16:04.548 Relative Write Throughput: 0 00:16:04.548 Relative Write Latency: 0 00:16:04.548 Idle Power: Not Reported 00:16:04.548 Active Power: Not Reported 00:16:04.548 Non-Operational Permissive Mode: Not Supported 00:16:04.548 00:16:04.548 Health Information 00:16:04.548 ================== 00:16:04.548 Critical Warnings: 00:16:04.548 
Available Spare Space: OK 00:16:04.548 Temperature: OK 00:16:04.548 Device Reliability: OK 00:16:04.548 Read Only: No 00:16:04.548 Volatile Memory Backup: OK 00:16:04.548 Current Temperature: 323 Kelvin (50 Celsius) 00:16:04.548 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:04.548 Available Spare: 0% 00:16:04.548 Available Spare Threshold: 0% 00:16:04.548 Life Percentage Used: 0% 00:16:04.548 Data Units Read: 904 00:16:04.548 Data Units Written: 833 00:16:04.548 Host Read Commands: 38223 00:16:04.548 Host Write Commands: 37646 00:16:04.548 Controller Busy Time: 0 minutes 00:16:04.548 Power Cycles: 0 00:16:04.548 Power On Hours: 0 hours 00:16:04.548 Unsafe Shutdowns: 0 00:16:04.548 Unrecoverable Media Errors: 0 00:16:04.548 Lifetime Error Log Entries: 0 00:16:04.548 Warning Temperature Time: 0 minutes 00:16:04.548 Critical Temperature Time: 0 minutes 00:16:04.548 00:16:04.548 Number of Queues 00:16:04.548 ================ 00:16:04.548 Number of I/O Submission Queues: 64 00:16:04.548 Number of I/O Completion Queues: 64 00:16:04.548 00:16:04.548 ZNS Specific Controller Data 00:16:04.548 ============================ 00:16:04.548 Zone Append Size Limit: 0 00:16:04.548 00:16:04.548 00:16:04.548 Active Namespaces 00:16:04.548 ================= 00:16:04.548 Namespace ID:1 00:16:04.548 Error Recovery Timeout: Unlimited 00:16:04.548 Command Set Identifier: NVM (00h) 00:16:04.548 Deallocate: Supported 00:16:04.548 Deallocated/Unwritten Error: Supported 00:16:04.548 Deallocated Read Value: All 0x00 00:16:04.548 Deallocate in Write Zeroes: Not Supported 00:16:04.548 Deallocated Guard Field: 0xFFFF 00:16:04.548 Flush: Supported 00:16:04.548 Reservation: Not Supported 00:16:04.548 Namespace Sharing Capabilities: Multiple Controllers 00:16:04.548 Size (in LBAs): 262144 (1GiB) 00:16:04.548 Capacity (in LBAs): 262144 (1GiB) 00:16:04.548 Utilization (in LBAs): 262144 (1GiB) 00:16:04.548 Thin Provisioning: Not Supported 00:16:04.548 Per-NS Atomic Units: No 00:16:04.548 Maximum Single Source Range Length: 128 00:16:04.548 Maximum Copy Length: 128 00:16:04.548 Maximum Source Range Count: 128 00:16:04.548 NGUID/EUI64 Never Reused: No 00:16:04.548 Namespace Write Protected: No 00:16:04.548 Endurance group ID: 1 00:16:04.548 Number of LBA Formats: 8 00:16:04.548 Current LBA Format: LBA Format #04 00:16:04.548 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:04.548 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:04.548 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:04.548 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:04.548 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:04.548 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:04.548 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:04.548 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:04.548 00:16:04.548 Get Feature FDP: 00:16:04.548 ================ 00:16:04.548 Enabled: Yes 00:16:04.548 FDP configuration index: 0 00:16:04.548 00:16:04.548 FDP configurations log page 00:16:04.548 =========================== 00:16:04.548 Number of FDP configurations: 1 00:16:04.548 Version: 0 00:16:04.548 Size: 112 00:16:04.548 FDP Configuration Descriptor: 0 00:16:04.548 Descriptor Size: 96 00:16:04.548 Reclaim Group Identifier format: 2 00:16:04.548 FDP Volatile Write Cache: Not Present 00:16:04.548 FDP Configuration: Valid 00:16:04.548 Vendor Specific Size: 0 00:16:04.548 Number of Reclaim Groups: 2 00:16:04.548 Number of Reclaim Unit Handles: 8 00:16:04.548 Max Placement Identifiers: 128 00:16:04.548 Number of
Namespaces Supported: 256 00:16:04.548 Reclaim unit Nominal Size: 6000000 bytes 00:16:04.548 Estimated Reclaim Unit Time Limit: Not Reported 00:16:04.548 RUH Desc #000: RUH Type: Initially Isolated 00:16:04.548 RUH Desc #001: RUH Type: Initially Isolated 00:16:04.548 RUH Desc #002: RUH Type: Initially Isolated 00:16:04.548 RUH Desc #003: RUH Type: Initially Isolated 00:16:04.548 RUH Desc #004: RUH Type: Initially Isolated 00:16:04.548 RUH Desc #005: RUH Type: Initially Isolated 00:16:04.548 RUH Desc #006: RUH Type: Initially Isolated 00:16:04.548 RUH Desc #007: RUH Type: Initially Isolated 00:16:04.548 00:16:04.548 FDP reclaim unit handle usage log page 00:16:04.548 ====================================== 00:16:04.548 Number of Reclaim Unit Handles: 8 00:16:04.548 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:16:04.548 RUH Usage Desc #001: RUH Attributes: Unused 00:16:04.548 RUH Usage Desc #002: RUH Attributes: Unused 00:16:04.548 RUH Usage Desc #003: RUH Attributes: Unused 00:16:04.548 RUH Usage Desc #004: RUH Attributes: Unused 00:16:04.548 RUH Usage Desc #005: RUH Attributes: Unused 00:16:04.548 RUH Usage Desc #006: RUH Attributes: Unused 00:16:04.548 RUH Usage Desc #007: RUH Attributes: Unused 00:16:04.548 00:16:04.548 FDP statistics log page 00:16:04.548 ======================= 00:16:04.548 Host bytes with metadata written: 526884864 00:16:04.548 Med[2024-12-09 22:57:31.757637] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64457 terminated unexpected 00:16:04.548 ia bytes with metadata written: 526942208 00:16:04.548 Media bytes erased: 0 00:16:04.548 00:16:04.548 FDP events log page 00:16:04.548 =================== 00:16:04.548 Number of FDP events: 0 00:16:04.548 00:16:04.548 NVM Specific Namespace Data 00:16:04.548 =========================== 00:16:04.548 Logical Block Storage Tag Mask: 0 00:16:04.549 Protection Information Capabilities: 00:16:04.549 16b Guard Protection Information Storage Tag Support: No 00:16:04.549 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:04.549 Storage Tag Check Read Support: No 00:16:04.549 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.549 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.549 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.549 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.549 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.549 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.549 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.549 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.549 ===================================================== 00:16:04.549 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:04.549 ===================================================== 00:16:04.549 Controller Capabilities/Features 00:16:04.549 ================================ 00:16:04.549 Vendor ID: 1b36 00:16:04.549 Subsystem Vendor ID: 1af4 00:16:04.549 Serial Number: 12342 00:16:04.549 Model Number: QEMU NVMe Ctrl 00:16:04.549 Firmware Version: 8.0.0 00:16:04.549 Recommended Arb Burst: 6 00:16:04.549 IEEE OUI Identifier: 00 54 52 00:16:04.549 Multi-path I/O
00:16:04.549 May have multiple subsystem ports: No 00:16:04.549 May have multiple controllers: No 00:16:04.549 Associated with SR-IOV VF: No 00:16:04.549 Max Data Transfer Size: 524288 00:16:04.549 Max Number of Namespaces: 256 00:16:04.549 Max Number of I/O Queues: 64 00:16:04.549 NVMe Specification Version (VS): 1.4 00:16:04.549 NVMe Specification Version (Identify): 1.4 00:16:04.549 Maximum Queue Entries: 2048 00:16:04.549 Contiguous Queues Required: Yes 00:16:04.549 Arbitration Mechanisms Supported 00:16:04.549 Weighted Round Robin: Not Supported 00:16:04.549 Vendor Specific: Not Supported 00:16:04.549 Reset Timeout: 7500 ms 00:16:04.549 Doorbell Stride: 4 bytes 00:16:04.549 NVM Subsystem Reset: Not Supported 00:16:04.549 Command Sets Supported 00:16:04.549 NVM Command Set: Supported 00:16:04.549 Boot Partition: Not Supported 00:16:04.549 Memory Page Size Minimum: 4096 bytes 00:16:04.549 Memory Page Size Maximum: 65536 bytes 00:16:04.549 Persistent Memory Region: Not Supported 00:16:04.549 Optional Asynchronous Events Supported 00:16:04.549 Namespace Attribute Notices: Supported 00:16:04.549 Firmware Activation Notices: Not Supported 00:16:04.549 ANA Change Notices: Not Supported 00:16:04.549 PLE Aggregate Log Change Notices: Not Supported 00:16:04.549 LBA Status Info Alert Notices: Not Supported 00:16:04.549 EGE Aggregate Log Change Notices: Not Supported 00:16:04.549 Normal NVM Subsystem Shutdown event: Not Supported 00:16:04.549 Zone Descriptor Change Notices: Not Supported 00:16:04.549 Discovery Log Change Notices: Not Supported 00:16:04.549 Controller Attributes 00:16:04.549 128-bit Host Identifier: Not Supported 00:16:04.549 Non-Operational Permissive Mode: Not Supported 00:16:04.549 NVM Sets: Not Supported 00:16:04.549 Read Recovery Levels: Not Supported 00:16:04.549 Endurance Groups: Not Supported 00:16:04.549 Predictable Latency Mode: Not Supported 00:16:04.549 Traffic Based Keep Alive: Not Supported 00:16:04.549 Namespace Granularity: Not Supported 00:16:04.549 SQ Associations: Not Supported 00:16:04.549 UUID List: Not Supported 00:16:04.549 Multi-Domain Subsystem: Not Supported 00:16:04.549 Fixed Capacity Management: Not Supported 00:16:04.549 Variable Capacity Management: Not Supported 00:16:04.549 Delete Endurance Group: Not Supported 00:16:04.549 Delete NVM Set: Not Supported 00:16:04.549 Extended LBA Formats Supported: Supported 00:16:04.549 Flexible Data Placement Supported: Not Supported 00:16:04.549 00:16:04.549 Controller Memory Buffer Support 00:16:04.549 ================================ 00:16:04.549 Supported: No 00:16:04.549 00:16:04.549 Persistent Memory Region Support 00:16:04.549 ================================ 00:16:04.549 Supported: No 00:16:04.549 00:16:04.549 Admin Command Set Attributes 00:16:04.549 ============================ 00:16:04.549 Security Send/Receive: Not Supported 00:16:04.549 Format NVM: Supported 00:16:04.549 Firmware Activate/Download: Not Supported 00:16:04.550 Namespace Management: Supported 00:16:04.550 Device Self-Test: Not Supported 00:16:04.550 Directives: Supported 00:16:04.550 NVMe-MI: Not Supported 00:16:04.550 Virtualization Management: Not Supported 00:16:04.550 Doorbell Buffer Config: Supported 00:16:04.550 Get LBA Status Capability: Not Supported 00:16:04.550 Command & Feature Lockdown Capability: Not Supported 00:16:04.550 Abort Command Limit: 4 00:16:04.550 Async Event Request Limit: 4 00:16:04.550 Number of Firmware Slots: N/A 00:16:04.550 Firmware Slot 1 Read-Only: N/A 00:16:04.550 Firmware Activation Without Reset: N/A 
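The temperature fields in these dumps are printed in both units, and the pairs are self-consistent: the identify tool evidently applies a flat 273 K offset rather than 273.15. A quick bash check, using the two values that recur throughout this log (a sketch, not part of the test itself):

    # sanity-check the Kelvin/Celsius pairs printed by the identify dumps (273 K offset)
    for k in 323 343; do
        echo "${k} Kelvin = $((k - 273)) Celsius"
    done
    # -> 323 Kelvin = 50 Celsius, 343 Kelvin = 70 Celsius

which matches the Current Temperature and Temperature Threshold lines above and below.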
00:16:04.550 Multiple Update Detection Support: N/A 00:16:04.550 Firmware Update Granularity: No Information Provided 00:16:04.550 Per-Namespace SMART Log: Yes 00:16:04.550 Asymmetric Namespace Access Log Page: Not Supported 00:16:04.550 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:16:04.550 Command Effects Log Page: Supported 00:16:04.550 Get Log Page Extended Data: Supported 00:16:04.550 Telemetry Log Pages: Not Supported 00:16:04.550 Persistent Event Log Pages: Not Supported 00:16:04.550 Supported Log Pages Log Page: May Support 00:16:04.550 Commands Supported & Effects Log Page: Not Supported 00:16:04.550 Feature Identifiers & Effects Log Page: May Support 00:16:04.550 NVMe-MI Commands & Effects Log Page: May Support 00:16:04.550 Data Area 4 for Telemetry Log: Not Supported 00:16:04.550 Error Log Page Entries Supported: 1 00:16:04.550 Keep Alive: Not Supported 00:16:04.550 00:16:04.550 NVM Command Set Attributes 00:16:04.550 ========================== 00:16:04.550 Submission Queue Entry Size 00:16:04.550 Max: 64 00:16:04.550 Min: 64 00:16:04.550 Completion Queue Entry Size 00:16:04.550 Max: 16 00:16:04.550 Min: 16 00:16:04.550 Number of Namespaces: 256 00:16:04.550 Compare Command: Supported 00:16:04.550 Write Uncorrectable Command: Not Supported 00:16:04.550 Dataset Management Command: Supported 00:16:04.550 Write Zeroes Command: Supported 00:16:04.550 Set Features Save Field: Supported 00:16:04.550 Reservations: Not Supported 00:16:04.550 Timestamp: Supported 00:16:04.550 Copy: Supported 00:16:04.550 Volatile Write Cache: Present 00:16:04.550 Atomic Write Unit (Normal): 1 00:16:04.550 Atomic Write Unit (PFail): 1 00:16:04.550 Atomic Compare & Write Unit: 1 00:16:04.550 Fused Compare & Write: Not Supported 00:16:04.550 Scatter-Gather List 00:16:04.550 SGL Command Set: Supported 00:16:04.550 SGL Keyed: Not Supported 00:16:04.550 SGL Bit Bucket Descriptor: Not Supported 00:16:04.550 SGL Metadata Pointer: Not Supported 00:16:04.550 Oversized SGL: Not Supported 00:16:04.550 SGL Metadata Address: Not Supported 00:16:04.550 SGL Offset: Not Supported 00:16:04.550 Transport SGL Data Block: Not Supported 00:16:04.550 Replay Protected Memory Block: Not Supported 00:16:04.550 00:16:04.550 Firmware Slot Information 00:16:04.550 ========================= 00:16:04.550 Active slot: 1 00:16:04.550 Slot 1 Firmware Revision: 1.0 00:16:04.550 00:16:04.550 00:16:04.550 Commands Supported and Effects 00:16:04.550 ============================== 00:16:04.550 Admin Commands 00:16:04.550 -------------- 00:16:04.550 Delete I/O Submission Queue (00h): Supported 00:16:04.550 Create I/O Submission Queue (01h): Supported 00:16:04.550 Get Log Page (02h): Supported 00:16:04.550 Delete I/O Completion Queue (04h): Supported 00:16:04.550 Create I/O Completion Queue (05h): Supported 00:16:04.550 Identify (06h): Supported 00:16:04.550 Abort (08h): Supported 00:16:04.550 Set Features (09h): Supported 00:16:04.550 Get Features (0Ah): Supported 00:16:04.550 Asynchronous Event Request (0Ch): Supported 00:16:04.550 Namespace Attachment (15h): Supported NS-Inventory-Change 00:16:04.550 Directive Send (19h): Supported 00:16:04.550 Directive Receive (1Ah): Supported 00:16:04.550 Virtualization Management (1Ch): Supported 00:16:04.550 Doorbell Buffer Config (7Ch): Supported 00:16:04.550 Format NVM (80h): Supported LBA-Change 00:16:04.550 I/O Commands 00:16:04.550 ------------ 00:16:04.550 Flush (00h): Supported LBA-Change 00:16:04.550 Write (01h): Supported LBA-Change 00:16:04.550 Read (02h): Supported 00:16:04.550 Compare (05h): 
Supported 00:16:04.550 Write Zeroes (08h): Supported LBA-Change 00:16:04.550 Dataset Management (09h): Supported LBA-Change 00:16:04.550 Unknown (0Ch): Supported 00:16:04.550 Unknown (12h): Supported 00:16:04.550 Copy (19h): Supported LBA-Change 00:16:04.550 Unknown (1Dh): Supported LBA-Change 00:16:04.550 00:16:04.550 Error Log 00:16:04.550 ========= 00:16:04.550 00:16:04.550 Arbitration 00:16:04.550 =========== 00:16:04.550 Arbitration Burst: no limit 00:16:04.550 00:16:04.550 Power Management 00:16:04.550 ================ 00:16:04.550 Number of Power States: 1 00:16:04.550 Current Power State: Power State #0 00:16:04.550 Power State #0: 00:16:04.550 Max Power: 25.00 W 00:16:04.550 Non-Operational State: Operational 00:16:04.550 Entry Latency: 16 microseconds 00:16:04.550 Exit Latency: 4 microseconds 00:16:04.550 Relative Read Throughput: 0 00:16:04.550 Relative Read Latency: 0 00:16:04.550 Relative Write Throughput: 0 00:16:04.550 Relative Write Latency: 0 00:16:04.550 Idle Power: Not Reported 00:16:04.550 Active Power: Not Reported 00:16:04.550 Non-Operational Permissive Mode: Not Supported 00:16:04.550 00:16:04.550 Health Information 00:16:04.550 ================== 00:16:04.550 Critical Warnings: 00:16:04.550 Available Spare Space: OK 00:16:04.550 Temperature: OK 00:16:04.550 Device Reliability: OK 00:16:04.550 Read Only: No 00:16:04.551 Volatile Memory Backup: OK 00:16:04.551 Current Temperature: 323 Kelvin (50 Celsius) 00:16:04.551 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:04.551 Available Spare: 0% 00:16:04.551 Available Spare Threshold: 0% 00:16:04.551 Life Percentage Used: 0% 00:16:04.551 Data Units Read: 2456 00:16:04.551 Data Units Written: 2243 00:16:04.551 Host Read Commands: 112488 00:16:04.551 Host Write Commands: 110757 00:16:04.551 Controller Busy Time: 0 minutes 00:16:04.551 Power Cycles: 0 00:16:04.551 Power On Hours: 0 hours 00:16:04.551 Unsafe Shutdowns: 0 00:16:04.551 Unrecoverable Media Errors: 0 00:16:04.551 Lifetime Error Log Entries: 0 00:16:04.551 Warning Temperature Time: 0 minutes 00:16:04.551 Critical Temperature Time: 0 minutes 00:16:04.551 00:16:04.551 Number of Queues 00:16:04.551 ================ 00:16:04.551 Number of I/O Submission Queues: 64 00:16:04.551 Number of I/O Completion Queues: 64 00:16:04.551 00:16:04.551 ZNS Specific Controller Data 00:16:04.551 ============================ 00:16:04.551 Zone Append Size Limit: 0 00:16:04.551 00:16:04.551 00:16:04.551 Active Namespaces 00:16:04.551 ================= 00:16:04.551 Namespace ID:1 00:16:04.551 Error Recovery Timeout: Unlimited 00:16:04.551 Command Set Identifier: NVM (00h) 00:16:04.551 Deallocate: Supported 00:16:04.551 Deallocated/Unwritten Error: Supported 00:16:04.551 Deallocated Read Value: All 0x00 00:16:04.551 Deallocate in Write Zeroes: Not Supported 00:16:04.551 Deallocated Guard Field: 0xFFFF 00:16:04.551 Flush: Supported 00:16:04.551 Reservation: Not Supported 00:16:04.551 Namespace Sharing Capabilities: Private 00:16:04.551 Size (in LBAs): 1048576 (4GiB) 00:16:04.551 Capacity (in LBAs): 1048576 (4GiB) 00:16:04.551 Utilization (in LBAs): 1048576 (4GiB) 00:16:04.551 Thin Provisioning: Not Supported 00:16:04.551 Per-NS Atomic Units: No 00:16:04.551 Maximum Single Source Range Length: 128 00:16:04.551 Maximum Copy Length: 128 00:16:04.551 Maximum Source Range Count: 128 00:16:04.551 NGUID/EUI64 Never Reused: No 00:16:04.551 Namespace Write Protected: No 00:16:04.551 Number of LBA Formats: 8 00:16:04.551 Current LBA Format: LBA Format #04 00:16:04.551 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:16:04.551 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:04.551 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:04.551 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:04.551 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:04.551 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:04.551 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:04.551 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:04.551 00:16:04.551 NVM Specific Namespace Data 00:16:04.551 =========================== 00:16:04.551 Logical Block Storage Tag Mask: 0 00:16:04.551 Protection Information Capabilities: 00:16:04.551 16b Guard Protection Information Storage Tag Support: No 00:16:04.551 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:04.551 Storage Tag Check Read Support: No 00:16:04.551 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.551 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.551 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.551 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.551 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.551 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.551 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.551 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.551 Namespace ID:2 00:16:04.551 Error Recovery Timeout: Unlimited 00:16:04.551 Command Set Identifier: NVM (00h) 00:16:04.551 Deallocate: Supported 00:16:04.551 Deallocated/Unwritten Error: Supported 00:16:04.551 Deallocated Read Value: All 0x00 00:16:04.551 Deallocate in Write Zeroes: Not Supported 00:16:04.551 Deallocated Guard Field: 0xFFFF 00:16:04.551 Flush: Supported 00:16:04.551 Reservation: Not Supported 00:16:04.551 Namespace Sharing Capabilities: Private 00:16:04.551 Size (in LBAs): 1048576 (4GiB) 00:16:04.551 Capacity (in LBAs): 1048576 (4GiB) 00:16:04.551 Utilization (in LBAs): 1048576 (4GiB) 00:16:04.551 Thin Provisioning: Not Supported 00:16:04.551 Per-NS Atomic Units: No 00:16:04.551 Maximum Single Source Range Length: 128 00:16:04.551 Maximum Copy Length: 128 00:16:04.551 Maximum Source Range Count: 128 00:16:04.551 NGUID/EUI64 Never Reused: No 00:16:04.551 Namespace Write Protected: No 00:16:04.551 Number of LBA Formats: 8 00:16:04.551 Current LBA Format: LBA Format #04 00:16:04.551 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:04.551 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:04.551 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:04.551 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:04.551 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:04.551 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:04.551 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:04.551 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:04.551 00:16:04.551 NVM Specific Namespace Data 00:16:04.551 =========================== 00:16:04.551 Logical Block Storage Tag Mask: 0 00:16:04.551 Protection Information Capabilities: 00:16:04.551 16b Guard Protection Information Storage Tag Support: No 00:16:04.551 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
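The parenthesized namespace sizes in these dumps follow directly from the LBA count multiplied by the data size of the current LBA format, here format #04 (4096-byte data, no metadata). A minimal bash check of the two counts reported above, 262144 and 1048576:

    # LBA count x 4096-byte data size (current LBA format #04) -> bytes and GiB
    for lbas in 262144 1048576; do
        bytes=$((lbas * 4096))
        echo "${lbas} LBAs = ${bytes} bytes = $((bytes / 1024 / 1024 / 1024)) GiB"
    done
    # -> 262144 LBAs = 1 GiB, 1048576 LBAs = 4 GiB

which agrees with the Size/Capacity/Utilization lines printed for each namespace.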
00:16:04.551 Storage Tag Check Read Support: No 00:16:04.551 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Namespace ID:3 00:16:04.552 Error Recovery Timeout: Unlimited 00:16:04.552 Command Set Identifier: NVM (00h) 00:16:04.552 Deallocate: Supported 00:16:04.552 Deallocated/Unwritten Error: Supported 00:16:04.552 Deallocated Read Value: All 0x00 00:16:04.552 Deallocate in Write Zeroes: Not Supported 00:16:04.552 Deallocated Guard Field: 0xFFFF 00:16:04.552 Flush: Supported 00:16:04.552 Reservation: Not Supported 00:16:04.552 Namespace Sharing Capabilities: Private 00:16:04.552 Size (in LBAs): 1048576 (4GiB) 00:16:04.552 Capacity (in LBAs): 1048576 (4GiB) 00:16:04.552 Utilization (in LBAs): 1048576 (4GiB) 00:16:04.552 Thin Provisioning: Not Supported 00:16:04.552 Per-NS Atomic Units: No 00:16:04.552 Maximum Single Source Range Length: 128 00:16:04.552 Maximum Copy Length: 128 00:16:04.552 Maximum Source Range Count: 128 00:16:04.552 NGUID/EUI64 Never Reused: No 00:16:04.552 Namespace Write Protected: No 00:16:04.552 Number of LBA Formats: 8 00:16:04.552 Current LBA Format: LBA Format #04 00:16:04.552 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:04.552 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:04.552 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:04.552 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:04.552 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:04.552 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:04.552 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:04.552 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:04.552 00:16:04.552 NVM Specific Namespace Data 00:16:04.552 =========================== 00:16:04.552 Logical Block Storage Tag Mask: 0 00:16:04.552 Protection Information Capabilities: 00:16:04.552 16b Guard Protection Information Storage Tag Support: No 00:16:04.552 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:04.552 Storage Tag Check Read Support: No 00:16:04.552 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.552 22:57:31 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:16:04.552 22:57:31 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:16:04.814 ===================================================== 00:16:04.814 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:04.814 ===================================================== 00:16:04.814 Controller Capabilities/Features 00:16:04.814 ================================ 00:16:04.814 Vendor ID: 1b36 00:16:04.814 Subsystem Vendor ID: 1af4 00:16:04.814 Serial Number: 12340 00:16:04.814 Model Number: QEMU NVMe Ctrl 00:16:04.814 Firmware Version: 8.0.0 00:16:04.814 Recommended Arb Burst: 6 00:16:04.814 IEEE OUI Identifier: 00 54 52 00:16:04.814 Multi-path I/O 00:16:04.814 May have multiple subsystem ports: No 00:16:04.814 May have multiple controllers: No 00:16:04.814 Associated with SR-IOV VF: No 00:16:04.814 Max Data Transfer Size: 524288 00:16:04.814 Max Number of Namespaces: 256 00:16:04.814 Max Number of I/O Queues: 64 00:16:04.814 NVMe Specification Version (VS): 1.4 00:16:04.814 NVMe Specification Version (Identify): 1.4 00:16:04.814 Maximum Queue Entries: 2048 00:16:04.814 Contiguous Queues Required: Yes 00:16:04.814 Arbitration Mechanisms Supported 00:16:04.814 Weighted Round Robin: Not Supported 00:16:04.814 Vendor Specific: Not Supported 00:16:04.814 Reset Timeout: 7500 ms 00:16:04.814 Doorbell Stride: 4 bytes 00:16:04.814 NVM Subsystem Reset: Not Supported 00:16:04.814 Command Sets Supported 00:16:04.814 NVM Command Set: Supported 00:16:04.814 Boot Partition: Not Supported 00:16:04.814 Memory Page Size Minimum: 4096 bytes 00:16:04.814 Memory Page Size Maximum: 65536 bytes 00:16:04.814 Persistent Memory Region: Not Supported 00:16:04.814 Optional Asynchronous Events Supported 00:16:04.814 Namespace Attribute Notices: Supported 00:16:04.814 Firmware Activation Notices: Not Supported 00:16:04.814 ANA Change Notices: Not Supported 00:16:04.814 PLE Aggregate Log Change Notices: Not Supported 00:16:04.814 LBA Status Info Alert Notices: Not Supported 00:16:04.814 EGE Aggregate Log Change Notices: Not Supported 00:16:04.814 Normal NVM Subsystem Shutdown event: Not Supported 00:16:04.814 Zone Descriptor Change Notices: Not Supported 00:16:04.814 Discovery Log Change Notices: Not Supported 00:16:04.814 Controller Attributes 00:16:04.814 128-bit Host Identifier: Not Supported 00:16:04.814 Non-Operational Permissive Mode: Not Supported 00:16:04.814 NVM Sets: Not Supported 00:16:04.814 Read Recovery Levels: Not Supported 00:16:04.814 Endurance Groups: Not Supported 00:16:04.814 Predictable Latency Mode: Not Supported 00:16:04.814 Traffic Based Keep Alive: Not Supported 00:16:04.814 Namespace Granularity: Not Supported 00:16:04.814 SQ Associations: Not Supported 00:16:04.814 UUID List: Not Supported 00:16:04.814 Multi-Domain Subsystem: Not Supported 00:16:04.814 Fixed Capacity Management: Not Supported 00:16:04.814 Variable Capacity Management: Not Supported 00:16:04.814 Delete Endurance Group: Not Supported 00:16:04.814 Delete NVM Set: Not Supported 00:16:04.814 Extended LBA Formats Supported: Supported 00:16:04.814 Flexible Data Placement Supported: Not Supported 00:16:04.814 00:16:04.814 Controller Memory Buffer Support 00:16:04.814 ================================ 00:16:04.814 Supported: No 00:16:04.814 00:16:04.814 Persistent Memory Region Support 00:16:04.814 
================================ 00:16:04.814 Supported: No 00:16:04.814 00:16:04.814 Admin Command Set Attributes 00:16:04.814 ============================ 00:16:04.814 Security Send/Receive: Not Supported 00:16:04.815 Format NVM: Supported 00:16:04.815 Firmware Activate/Download: Not Supported 00:16:04.815 Namespace Management: Supported 00:16:04.815 Device Self-Test: Not Supported 00:16:04.815 Directives: Supported 00:16:04.815 NVMe-MI: Not Supported 00:16:04.815 Virtualization Management: Not Supported 00:16:04.815 Doorbell Buffer Config: Supported 00:16:04.815 Get LBA Status Capability: Not Supported 00:16:04.815 Command & Feature Lockdown Capability: Not Supported 00:16:04.815 Abort Command Limit: 4 00:16:04.815 Async Event Request Limit: 4 00:16:04.815 Number of Firmware Slots: N/A 00:16:04.815 Firmware Slot 1 Read-Only: N/A 00:16:04.815 Firmware Activation Without Reset: N/A 00:16:04.815 Multiple Update Detection Support: N/A 00:16:04.815 Firmware Update Granularity: No Information Provided 00:16:04.815 Per-Namespace SMART Log: Yes 00:16:04.815 Asymmetric Namespace Access Log Page: Not Supported 00:16:04.815 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:16:04.815 Command Effects Log Page: Supported 00:16:04.815 Get Log Page Extended Data: Supported 00:16:04.815 Telemetry Log Pages: Not Supported 00:16:04.815 Persistent Event Log Pages: Not Supported 00:16:04.815 Supported Log Pages Log Page: May Support 00:16:04.815 Commands Supported & Effects Log Page: Not Supported 00:16:04.815 Feature Identifiers & Effects Log Page: May Support 00:16:04.815 NVMe-MI Commands & Effects Log Page: May Support 00:16:04.815 Data Area 4 for Telemetry Log: Not Supported 00:16:04.815 Error Log Page Entries Supported: 1 00:16:04.815 Keep Alive: Not Supported 00:16:04.815 00:16:04.815 NVM Command Set Attributes 00:16:04.815 ========================== 00:16:04.815 Submission Queue Entry Size 00:16:04.815 Max: 64 00:16:04.815 Min: 64 00:16:04.815 Completion Queue Entry Size 00:16:04.815 Max: 16 00:16:04.815 Min: 16 00:16:04.815 Number of Namespaces: 256 00:16:04.815 Compare Command: Supported 00:16:04.815 Write Uncorrectable Command: Not Supported 00:16:04.815 Dataset Management Command: Supported 00:16:04.815 Write Zeroes Command: Supported 00:16:04.815 Set Features Save Field: Supported 00:16:04.815 Reservations: Not Supported 00:16:04.815 Timestamp: Supported 00:16:04.815 Copy: Supported 00:16:04.815 Volatile Write Cache: Present 00:16:04.815 Atomic Write Unit (Normal): 1 00:16:04.815 Atomic Write Unit (PFail): 1 00:16:04.815 Atomic Compare & Write Unit: 1 00:16:04.815 Fused Compare & Write: Not Supported 00:16:04.815 Scatter-Gather List 00:16:04.815 SGL Command Set: Supported 00:16:04.815 SGL Keyed: Not Supported 00:16:04.815 SGL Bit Bucket Descriptor: Not Supported 00:16:04.815 SGL Metadata Pointer: Not Supported 00:16:04.815 Oversized SGL: Not Supported 00:16:04.815 SGL Metadata Address: Not Supported 00:16:04.815 SGL Offset: Not Supported 00:16:04.815 Transport SGL Data Block: Not Supported 00:16:04.815 Replay Protected Memory Block: Not Supported 00:16:04.815 00:16:04.815 Firmware Slot Information 00:16:04.815 ========================= 00:16:04.815 Active slot: 1 00:16:04.815 Slot 1 Firmware Revision: 1.0 00:16:04.815 00:16:04.815 00:16:04.815 Commands Supported and Effects 00:16:04.815 ============================== 00:16:04.815 Admin Commands 00:16:04.815 -------------- 00:16:04.815 Delete I/O Submission Queue (00h): Supported 00:16:04.815 Create I/O Submission Queue (01h): Supported 00:16:04.815 
Get Log Page (02h): Supported 00:16:04.815 Delete I/O Completion Queue (04h): Supported 00:16:04.815 Create I/O Completion Queue (05h): Supported 00:16:04.815 Identify (06h): Supported 00:16:04.815 Abort (08h): Supported 00:16:04.815 Set Features (09h): Supported 00:16:04.815 Get Features (0Ah): Supported 00:16:04.815 Asynchronous Event Request (0Ch): Supported 00:16:04.815 Namespace Attachment (15h): Supported NS-Inventory-Change 00:16:04.815 Directive Send (19h): Supported 00:16:04.815 Directive Receive (1Ah): Supported 00:16:04.815 Virtualization Management (1Ch): Supported 00:16:04.815 Doorbell Buffer Config (7Ch): Supported 00:16:04.815 Format NVM (80h): Supported LBA-Change 00:16:04.815 I/O Commands 00:16:04.815 ------------ 00:16:04.815 Flush (00h): Supported LBA-Change 00:16:04.815 Write (01h): Supported LBA-Change 00:16:04.815 Read (02h): Supported 00:16:04.815 Compare (05h): Supported 00:16:04.815 Write Zeroes (08h): Supported LBA-Change 00:16:04.815 Dataset Management (09h): Supported LBA-Change 00:16:04.815 Unknown (0Ch): Supported 00:16:04.815 Unknown (12h): Supported 00:16:04.815 Copy (19h): Supported LBA-Change 00:16:04.815 Unknown (1Dh): Supported LBA-Change 00:16:04.815 00:16:04.815 Error Log 00:16:04.815 ========= 00:16:04.815 00:16:04.815 Arbitration 00:16:04.815 =========== 00:16:04.815 Arbitration Burst: no limit 00:16:04.815 00:16:04.815 Power Management 00:16:04.815 ================ 00:16:04.815 Number of Power States: 1 00:16:04.815 Current Power State: Power State #0 00:16:04.815 Power State #0: 00:16:04.815 Max Power: 25.00 W 00:16:04.815 Non-Operational State: Operational 00:16:04.815 Entry Latency: 16 microseconds 00:16:04.815 Exit Latency: 4 microseconds 00:16:04.815 Relative Read Throughput: 0 00:16:04.815 Relative Read Latency: 0 00:16:04.815 Relative Write Throughput: 0 00:16:04.815 Relative Write Latency: 0 00:16:04.815 Idle Power: Not Reported 00:16:04.815 Active Power: Not Reported 00:16:04.815 Non-Operational Permissive Mode: Not Supported 00:16:04.815 00:16:04.815 Health Information 00:16:04.815 ================== 00:16:04.815 Critical Warnings: 00:16:04.815 Available Spare Space: OK 00:16:04.815 Temperature: OK 00:16:04.815 Device Reliability: OK 00:16:04.815 Read Only: No 00:16:04.815 Volatile Memory Backup: OK 00:16:04.815 Current Temperature: 323 Kelvin (50 Celsius) 00:16:04.815 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:04.815 Available Spare: 0% 00:16:04.815 Available Spare Threshold: 0% 00:16:04.815 Life Percentage Used: 0% 00:16:04.815 Data Units Read: 766 00:16:04.815 Data Units Written: 694 00:16:04.815 Host Read Commands: 36720 00:16:04.815 Host Write Commands: 36506 00:16:04.815 Controller Busy Time: 0 minutes 00:16:04.815 Power Cycles: 0 00:16:04.815 Power On Hours: 0 hours 00:16:04.815 Unsafe Shutdowns: 0 00:16:04.815 Unrecoverable Media Errors: 0 00:16:04.815 Lifetime Error Log Entries: 0 00:16:04.815 Warning Temperature Time: 0 minutes 00:16:04.815 Critical Temperature Time: 0 minutes 00:16:04.815 00:16:04.815 Number of Queues 00:16:04.815 ================ 00:16:04.815 Number of I/O Submission Queues: 64 00:16:04.815 Number of I/O Completion Queues: 64 00:16:04.815 00:16:04.815 ZNS Specific Controller Data 00:16:04.815 ============================ 00:16:04.815 Zone Append Size Limit: 0 00:16:04.815 00:16:04.815 00:16:04.815 Active Namespaces 00:16:04.815 ================= 00:16:04.815 Namespace ID:1 00:16:04.815 Error Recovery Timeout: Unlimited 00:16:04.815 Command Set Identifier: NVM (00h) 00:16:04.815 Deallocate: Supported 
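The nvme.sh@15/@16 xtrace lines above show how these dumps are produced: the test loops over its array of PCIe BDFs and invokes the identify binary once per controller. A standalone sketch of that loop, with the binary path and the four BDFs taken from this particular run (both are environment-specific assumptions):

    # rerun the identify pass by hand against the same QEMU controllers
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done

The -r transport string and the -i 0 argument are copied verbatim from the traces in this log.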
00:16:04.815 Deallocated/Unwritten Error: Supported 00:16:04.815 Deallocated Read Value: All 0x00 00:16:04.815 Deallocate in Write Zeroes: Not Supported 00:16:04.815 Deallocated Guard Field: 0xFFFF 00:16:04.815 Flush: Supported 00:16:04.815 Reservation: Not Supported 00:16:04.815 Metadata Transferred as: Separate Metadata Buffer 00:16:04.815 Namespace Sharing Capabilities: Private 00:16:04.815 Size (in LBAs): 1548666 (5GiB) 00:16:04.815 Capacity (in LBAs): 1548666 (5GiB) 00:16:04.815 Utilization (in LBAs): 1548666 (5GiB) 00:16:04.815 Thin Provisioning: Not Supported 00:16:04.815 Per-NS Atomic Units: No 00:16:04.815 Maximum Single Source Range Length: 128 00:16:04.815 Maximum Copy Length: 128 00:16:04.815 Maximum Source Range Count: 128 00:16:04.815 NGUID/EUI64 Never Reused: No 00:16:04.815 Namespace Write Protected: No 00:16:04.815 Number of LBA Formats: 8 00:16:04.815 Current LBA Format: LBA Format #07 00:16:04.815 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:04.815 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:04.815 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:04.815 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:04.815 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:04.815 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:04.815 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:04.815 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:04.815 00:16:04.815 NVM Specific Namespace Data 00:16:04.815 =========================== 00:16:04.815 Logical Block Storage Tag Mask: 0 00:16:04.815 Protection Information Capabilities: 00:16:04.815 16b Guard Protection Information Storage Tag Support: No 00:16:04.815 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:04.815 Storage Tag Check Read Support: No 00:16:04.815 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.815 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.815 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.815 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.816 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.816 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.816 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.816 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:04.816 22:57:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:16:04.816 22:57:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:16:05.385 ===================================================== 00:16:05.385 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:05.385 ===================================================== 00:16:05.385 Controller Capabilities/Features 00:16:05.385 ================================ 00:16:05.385 Vendor ID: 1b36 00:16:05.385 Subsystem Vendor ID: 1af4 00:16:05.385 Serial Number: 12341 00:16:05.385 Model Number: QEMU NVMe Ctrl 00:16:05.385 Firmware Version: 8.0.0 00:16:05.385 Recommended Arb Burst: 6 00:16:05.385 IEEE OUI Identifier: 00 54 52 00:16:05.385 Multi-path I/O 00:16:05.385 May have multiple subsystem ports: No 00:16:05.385 May have multiple 
controllers: No 00:16:05.385 Associated with SR-IOV VF: No 00:16:05.385 Max Data Transfer Size: 524288 00:16:05.385 Max Number of Namespaces: 256 00:16:05.385 Max Number of I/O Queues: 64 00:16:05.385 NVMe Specification Version (VS): 1.4 00:16:05.385 NVMe Specification Version (Identify): 1.4 00:16:05.385 Maximum Queue Entries: 2048 00:16:05.385 Contiguous Queues Required: Yes 00:16:05.385 Arbitration Mechanisms Supported 00:16:05.385 Weighted Round Robin: Not Supported 00:16:05.385 Vendor Specific: Not Supported 00:16:05.385 Reset Timeout: 7500 ms 00:16:05.385 Doorbell Stride: 4 bytes 00:16:05.385 NVM Subsystem Reset: Not Supported 00:16:05.385 Command Sets Supported 00:16:05.385 NVM Command Set: Supported 00:16:05.385 Boot Partition: Not Supported 00:16:05.385 Memory Page Size Minimum: 4096 bytes 00:16:05.385 Memory Page Size Maximum: 65536 bytes 00:16:05.385 Persistent Memory Region: Not Supported 00:16:05.385 Optional Asynchronous Events Supported 00:16:05.385 Namespace Attribute Notices: Supported 00:16:05.385 Firmware Activation Notices: Not Supported 00:16:05.385 ANA Change Notices: Not Supported 00:16:05.385 PLE Aggregate Log Change Notices: Not Supported 00:16:05.385 LBA Status Info Alert Notices: Not Supported 00:16:05.385 EGE Aggregate Log Change Notices: Not Supported 00:16:05.385 Normal NVM Subsystem Shutdown event: Not Supported 00:16:05.385 Zone Descriptor Change Notices: Not Supported 00:16:05.385 Discovery Log Change Notices: Not Supported 00:16:05.385 Controller Attributes 00:16:05.385 128-bit Host Identifier: Not Supported 00:16:05.385 Non-Operational Permissive Mode: Not Supported 00:16:05.385 NVM Sets: Not Supported 00:16:05.385 Read Recovery Levels: Not Supported 00:16:05.385 Endurance Groups: Not Supported 00:16:05.386 Predictable Latency Mode: Not Supported 00:16:05.386 Traffic Based Keep Alive: Not Supported 00:16:05.386 Namespace Granularity: Not Supported 00:16:05.386 SQ Associations: Not Supported 00:16:05.386 UUID List: Not Supported 00:16:05.386 Multi-Domain Subsystem: Not Supported 00:16:05.386 Fixed Capacity Management: Not Supported 00:16:05.386 Variable Capacity Management: Not Supported 00:16:05.386 Delete Endurance Group: Not Supported 00:16:05.386 Delete NVM Set: Not Supported 00:16:05.386 Extended LBA Formats Supported: Supported 00:16:05.386 Flexible Data Placement Supported: Not Supported 00:16:05.386 00:16:05.386 Controller Memory Buffer Support 00:16:05.386 ================================ 00:16:05.386 Supported: No 00:16:05.386 00:16:05.386 Persistent Memory Region Support 00:16:05.386 ================================ 00:16:05.386 Supported: No 00:16:05.386 00:16:05.386 Admin Command Set Attributes 00:16:05.386 ============================ 00:16:05.386 Security Send/Receive: Not Supported 00:16:05.386 Format NVM: Supported 00:16:05.386 Firmware Activate/Download: Not Supported 00:16:05.386 Namespace Management: Supported 00:16:05.386 Device Self-Test: Not Supported 00:16:05.386 Directives: Supported 00:16:05.386 NVMe-MI: Not Supported 00:16:05.386 Virtualization Management: Not Supported 00:16:05.386 Doorbell Buffer Config: Supported 00:16:05.386 Get LBA Status Capability: Not Supported 00:16:05.386 Command & Feature Lockdown Capability: Not Supported 00:16:05.386 Abort Command Limit: 4 00:16:05.386 Async Event Request Limit: 4 00:16:05.386 Number of Firmware Slots: N/A 00:16:05.386 Firmware Slot 1 Read-Only: N/A 00:16:05.386 Firmware Activation Without Reset: N/A 00:16:05.386 Multiple Update Detection Support: N/A 00:16:05.386 Firmware Update 
Granularity: No Information Provided 00:16:05.386 Per-Namespace SMART Log: Yes 00:16:05.386 Asymmetric Namespace Access Log Page: Not Supported 00:16:05.386 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:16:05.386 Command Effects Log Page: Supported 00:16:05.386 Get Log Page Extended Data: Supported 00:16:05.386 Telemetry Log Pages: Not Supported 00:16:05.386 Persistent Event Log Pages: Not Supported 00:16:05.386 Supported Log Pages Log Page: May Support 00:16:05.386 Commands Supported & Effects Log Page: Not Supported 00:16:05.386 Feature Identifiers & Effects Log Page: May Support 00:16:05.386 NVMe-MI Commands & Effects Log Page: May Support 00:16:05.386 Data Area 4 for Telemetry Log: Not Supported 00:16:05.386 Error Log Page Entries Supported: 1 00:16:05.386 Keep Alive: Not Supported 00:16:05.386 00:16:05.386 NVM Command Set Attributes 00:16:05.386 ========================== 00:16:05.386 Submission Queue Entry Size 00:16:05.386 Max: 64 00:16:05.386 Min: 64 00:16:05.386 Completion Queue Entry Size 00:16:05.386 Max: 16 00:16:05.386 Min: 16 00:16:05.386 Number of Namespaces: 256 00:16:05.386 Compare Command: Supported 00:16:05.386 Write Uncorrectable Command: Not Supported 00:16:05.386 Dataset Management Command: Supported 00:16:05.386 Write Zeroes Command: Supported 00:16:05.386 Set Features Save Field: Supported 00:16:05.386 Reservations: Not Supported 00:16:05.386 Timestamp: Supported 00:16:05.386 Copy: Supported 00:16:05.386 Volatile Write Cache: Present 00:16:05.386 Atomic Write Unit (Normal): 1 00:16:05.386 Atomic Write Unit (PFail): 1 00:16:05.386 Atomic Compare & Write Unit: 1 00:16:05.386 Fused Compare & Write: Not Supported 00:16:05.386 Scatter-Gather List 00:16:05.386 SGL Command Set: Supported 00:16:05.386 SGL Keyed: Not Supported 00:16:05.386 SGL Bit Bucket Descriptor: Not Supported 00:16:05.386 SGL Metadata Pointer: Not Supported 00:16:05.386 Oversized SGL: Not Supported 00:16:05.386 SGL Metadata Address: Not Supported 00:16:05.386 SGL Offset: Not Supported 00:16:05.386 Transport SGL Data Block: Not Supported 00:16:05.386 Replay Protected Memory Block: Not Supported 00:16:05.386 00:16:05.386 Firmware Slot Information 00:16:05.386 ========================= 00:16:05.386 Active slot: 1 00:16:05.386 Slot 1 Firmware Revision: 1.0 00:16:05.386 00:16:05.386 00:16:05.386 Commands Supported and Effects 00:16:05.386 ============================== 00:16:05.386 Admin Commands 00:16:05.386 -------------- 00:16:05.386 Delete I/O Submission Queue (00h): Supported 00:16:05.386 Create I/O Submission Queue (01h): Supported 00:16:05.386 Get Log Page (02h): Supported 00:16:05.386 Delete I/O Completion Queue (04h): Supported 00:16:05.386 Create I/O Completion Queue (05h): Supported 00:16:05.386 Identify (06h): Supported 00:16:05.386 Abort (08h): Supported 00:16:05.386 Set Features (09h): Supported 00:16:05.386 Get Features (0Ah): Supported 00:16:05.386 Asynchronous Event Request (0Ch): Supported 00:16:05.386 Namespace Attachment (15h): Supported NS-Inventory-Change 00:16:05.386 Directive Send (19h): Supported 00:16:05.386 Directive Receive (1Ah): Supported 00:16:05.386 Virtualization Management (1Ch): Supported 00:16:05.386 Doorbell Buffer Config (7Ch): Supported 00:16:05.386 Format NVM (80h): Supported LBA-Change 00:16:05.386 I/O Commands 00:16:05.386 ------------ 00:16:05.386 Flush (00h): Supported LBA-Change 00:16:05.386 Write (01h): Supported LBA-Change 00:16:05.386 Read (02h): Supported 00:16:05.386 Compare (05h): Supported 00:16:05.386 Write Zeroes (08h): Supported LBA-Change 00:16:05.386 
Dataset Management (09h): Supported LBA-Change 00:16:05.386 Unknown (0Ch): Supported 00:16:05.386 Unknown (12h): Supported 00:16:05.386 Copy (19h): Supported LBA-Change 00:16:05.386 Unknown (1Dh): Supported LBA-Change 00:16:05.386 00:16:05.386 Error Log 00:16:05.386 ========= 00:16:05.386 00:16:05.386 Arbitration 00:16:05.386 =========== 00:16:05.386 Arbitration Burst: no limit 00:16:05.386 00:16:05.386 Power Management 00:16:05.386 ================ 00:16:05.386 Number of Power States: 1 00:16:05.386 Current Power State: Power State #0 00:16:05.386 Power State #0: 00:16:05.386 Max Power: 25.00 W 00:16:05.386 Non-Operational State: Operational 00:16:05.386 Entry Latency: 16 microseconds 00:16:05.386 Exit Latency: 4 microseconds 00:16:05.386 Relative Read Throughput: 0 00:16:05.386 Relative Read Latency: 0 00:16:05.386 Relative Write Throughput: 0 00:16:05.386 Relative Write Latency: 0 00:16:05.386 Idle Power: Not Reported 00:16:05.386 Active Power: Not Reported 00:16:05.386 Non-Operational Permissive Mode: Not Supported 00:16:05.386 00:16:05.386 Health Information 00:16:05.386 ================== 00:16:05.386 Critical Warnings: 00:16:05.386 Available Spare Space: OK 00:16:05.386 Temperature: OK 00:16:05.386 Device Reliability: OK 00:16:05.386 Read Only: No 00:16:05.386 Volatile Memory Backup: OK 00:16:05.386 Current Temperature: 323 Kelvin (50 Celsius) 00:16:05.386 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:05.386 Available Spare: 0% 00:16:05.386 Available Spare Threshold: 0% 00:16:05.386 Life Percentage Used: 0% 00:16:05.386 Data Units Read: 1189 00:16:05.386 Data Units Written: 1055 00:16:05.386 Host Read Commands: 55182 00:16:05.386 Host Write Commands: 53975 00:16:05.386 Controller Busy Time: 0 minutes 00:16:05.386 Power Cycles: 0 00:16:05.386 Power On Hours: 0 hours 00:16:05.386 Unsafe Shutdowns: 0 00:16:05.386 Unrecoverable Media Errors: 0 00:16:05.386 Lifetime Error Log Entries: 0 00:16:05.386 Warning Temperature Time: 0 minutes 00:16:05.386 Critical Temperature Time: 0 minutes 00:16:05.386 00:16:05.386 Number of Queues 00:16:05.386 ================ 00:16:05.386 Number of I/O Submission Queues: 64 00:16:05.386 Number of I/O Completion Queues: 64 00:16:05.386 00:16:05.386 ZNS Specific Controller Data 00:16:05.386 ============================ 00:16:05.386 Zone Append Size Limit: 0 00:16:05.386 00:16:05.386 00:16:05.386 Active Namespaces 00:16:05.386 ================= 00:16:05.386 Namespace ID:1 00:16:05.387 Error Recovery Timeout: Unlimited 00:16:05.387 Command Set Identifier: NVM (00h) 00:16:05.387 Deallocate: Supported 00:16:05.387 Deallocated/Unwritten Error: Supported 00:16:05.387 Deallocated Read Value: All 0x00 00:16:05.387 Deallocate in Write Zeroes: Not Supported 00:16:05.387 Deallocated Guard Field: 0xFFFF 00:16:05.387 Flush: Supported 00:16:05.387 Reservation: Not Supported 00:16:05.387 Namespace Sharing Capabilities: Private 00:16:05.387 Size (in LBAs): 1310720 (5GiB) 00:16:05.387 Capacity (in LBAs): 1310720 (5GiB) 00:16:05.387 Utilization (in LBAs): 1310720 (5GiB) 00:16:05.387 Thin Provisioning: Not Supported 00:16:05.387 Per-NS Atomic Units: No 00:16:05.387 Maximum Single Source Range Length: 128 00:16:05.387 Maximum Copy Length: 128 00:16:05.387 Maximum Source Range Count: 128 00:16:05.387 NGUID/EUI64 Never Reused: No 00:16:05.387 Namespace Write Protected: No 00:16:05.387 Number of LBA Formats: 8 00:16:05.387 Current LBA Format: LBA Format #04 00:16:05.387 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:05.387 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:16:05.387 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:05.387 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:05.387 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:05.387 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:05.387 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:05.387 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:05.387 00:16:05.387 NVM Specific Namespace Data 00:16:05.387 =========================== 00:16:05.387 Logical Block Storage Tag Mask: 0 00:16:05.387 Protection Information Capabilities: 00:16:05.387 16b Guard Protection Information Storage Tag Support: No 00:16:05.387 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:05.387 Storage Tag Check Read Support: No 00:16:05.387 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.387 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.387 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.387 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.387 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.387 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.387 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.387 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.387 22:57:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:16:05.387 22:57:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:16:05.647 ===================================================== 00:16:05.647 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:05.647 ===================================================== 00:16:05.647 Controller Capabilities/Features 00:16:05.647 ================================ 00:16:05.647 Vendor ID: 1b36 00:16:05.647 Subsystem Vendor ID: 1af4 00:16:05.647 Serial Number: 12342 00:16:05.647 Model Number: QEMU NVMe Ctrl 00:16:05.647 Firmware Version: 8.0.0 00:16:05.647 Recommended Arb Burst: 6 00:16:05.647 IEEE OUI Identifier: 00 54 52 00:16:05.647 Multi-path I/O 00:16:05.647 May have multiple subsystem ports: No 00:16:05.647 May have multiple controllers: No 00:16:05.647 Associated with SR-IOV VF: No 00:16:05.647 Max Data Transfer Size: 524288 00:16:05.647 Max Number of Namespaces: 256 00:16:05.647 Max Number of I/O Queues: 64 00:16:05.647 NVMe Specification Version (VS): 1.4 00:16:05.647 NVMe Specification Version (Identify): 1.4 00:16:05.647 Maximum Queue Entries: 2048 00:16:05.647 Contiguous Queues Required: Yes 00:16:05.647 Arbitration Mechanisms Supported 00:16:05.647 Weighted Round Robin: Not Supported 00:16:05.647 Vendor Specific: Not Supported 00:16:05.647 Reset Timeout: 7500 ms 00:16:05.647 Doorbell Stride: 4 bytes 00:16:05.647 NVM Subsystem Reset: Not Supported 00:16:05.647 Command Sets Supported 00:16:05.647 NVM Command Set: Supported 00:16:05.647 Boot Partition: Not Supported 00:16:05.647 Memory Page Size Minimum: 4096 bytes 00:16:05.647 Memory Page Size Maximum: 65536 bytes 00:16:05.647 Persistent Memory Region: Not Supported 00:16:05.647 Optional Asynchronous Events Supported 00:16:05.647 Namespace Attribute Notices: Supported 00:16:05.647 
Firmware Activation Notices: Not Supported 00:16:05.647 ANA Change Notices: Not Supported 00:16:05.647 PLE Aggregate Log Change Notices: Not Supported 00:16:05.647 LBA Status Info Alert Notices: Not Supported 00:16:05.647 EGE Aggregate Log Change Notices: Not Supported 00:16:05.647 Normal NVM Subsystem Shutdown event: Not Supported 00:16:05.647 Zone Descriptor Change Notices: Not Supported 00:16:05.647 Discovery Log Change Notices: Not Supported 00:16:05.647 Controller Attributes 00:16:05.647 128-bit Host Identifier: Not Supported 00:16:05.647 Non-Operational Permissive Mode: Not Supported 00:16:05.647 NVM Sets: Not Supported 00:16:05.647 Read Recovery Levels: Not Supported 00:16:05.647 Endurance Groups: Not Supported 00:16:05.647 Predictable Latency Mode: Not Supported 00:16:05.647 Traffic Based Keep Alive: Not Supported 00:16:05.647 Namespace Granularity: Not Supported 00:16:05.647 SQ Associations: Not Supported 00:16:05.647 UUID List: Not Supported 00:16:05.647 Multi-Domain Subsystem: Not Supported 00:16:05.647 Fixed Capacity Management: Not Supported 00:16:05.647 Variable Capacity Management: Not Supported 00:16:05.647 Delete Endurance Group: Not Supported 00:16:05.647 Delete NVM Set: Not Supported 00:16:05.647 Extended LBA Formats Supported: Supported 00:16:05.647 Flexible Data Placement Supported: Not Supported 00:16:05.647 00:16:05.647 Controller Memory Buffer Support 00:16:05.647 ================================ 00:16:05.647 Supported: No 00:16:05.647 00:16:05.647 Persistent Memory Region Support 00:16:05.647 ================================ 00:16:05.647 Supported: No 00:16:05.647 00:16:05.647 Admin Command Set Attributes 00:16:05.647 ============================ 00:16:05.647 Security Send/Receive: Not Supported 00:16:05.647 Format NVM: Supported 00:16:05.647 Firmware Activate/Download: Not Supported 00:16:05.647 Namespace Management: Supported 00:16:05.647 Device Self-Test: Not Supported 00:16:05.647 Directives: Supported 00:16:05.647 NVMe-MI: Not Supported 00:16:05.647 Virtualization Management: Not Supported 00:16:05.647 Doorbell Buffer Config: Supported 00:16:05.647 Get LBA Status Capability: Not Supported 00:16:05.647 Command & Feature Lockdown Capability: Not Supported 00:16:05.647 Abort Command Limit: 4 00:16:05.647 Async Event Request Limit: 4 00:16:05.647 Number of Firmware Slots: N/A 00:16:05.647 Firmware Slot 1 Read-Only: N/A 00:16:05.647 Firmware Activation Without Reset: N/A 00:16:05.647 Multiple Update Detection Support: N/A 00:16:05.647 Firmware Update Granularity: No Information Provided 00:16:05.647 Per-Namespace SMART Log: Yes 00:16:05.647 Asymmetric Namespace Access Log Page: Not Supported 00:16:05.647 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:16:05.647 Command Effects Log Page: Supported 00:16:05.647 Get Log Page Extended Data: Supported 00:16:05.647 Telemetry Log Pages: Not Supported 00:16:05.647 Persistent Event Log Pages: Not Supported 00:16:05.647 Supported Log Pages Log Page: May Support 00:16:05.647 Commands Supported & Effects Log Page: Not Supported 00:16:05.647 Feature Identifiers & Effects Log Page: May Support 00:16:05.647 NVMe-MI Commands & Effects Log Page: May Support 00:16:05.647 Data Area 4 for Telemetry Log: Not Supported 00:16:05.647 Error Log Page Entries Supported: 1 00:16:05.647 Keep Alive: Not Supported 00:16:05.647 00:16:05.647 NVM Command Set Attributes 00:16:05.647 ========================== 00:16:05.647 Submission Queue Entry Size 00:16:05.647 Max: 64 00:16:05.647 Min: 64 00:16:05.647 Completion Queue Entry Size 00:16:05.647 Max: 16 
00:16:05.647 Min: 16 00:16:05.647 Number of Namespaces: 256 00:16:05.647 Compare Command: Supported 00:16:05.647 Write Uncorrectable Command: Not Supported 00:16:05.647 Dataset Management Command: Supported 00:16:05.647 Write Zeroes Command: Supported 00:16:05.647 Set Features Save Field: Supported 00:16:05.647 Reservations: Not Supported 00:16:05.647 Timestamp: Supported 00:16:05.647 Copy: Supported 00:16:05.647 Volatile Write Cache: Present 00:16:05.647 Atomic Write Unit (Normal): 1 00:16:05.647 Atomic Write Unit (PFail): 1 00:16:05.647 Atomic Compare & Write Unit: 1 00:16:05.647 Fused Compare & Write: Not Supported 00:16:05.647 Scatter-Gather List 00:16:05.647 SGL Command Set: Supported 00:16:05.647 SGL Keyed: Not Supported 00:16:05.647 SGL Bit Bucket Descriptor: Not Supported 00:16:05.647 SGL Metadata Pointer: Not Supported 00:16:05.647 Oversized SGL: Not Supported 00:16:05.647 SGL Metadata Address: Not Supported 00:16:05.647 SGL Offset: Not Supported 00:16:05.647 Transport SGL Data Block: Not Supported 00:16:05.647 Replay Protected Memory Block: Not Supported 00:16:05.647 00:16:05.647 Firmware Slot Information 00:16:05.647 ========================= 00:16:05.647 Active slot: 1 00:16:05.647 Slot 1 Firmware Revision: 1.0 00:16:05.647 00:16:05.647 00:16:05.647 Commands Supported and Effects 00:16:05.647 ============================== 00:16:05.647 Admin Commands 00:16:05.647 -------------- 00:16:05.647 Delete I/O Submission Queue (00h): Supported 00:16:05.647 Create I/O Submission Queue (01h): Supported 00:16:05.647 Get Log Page (02h): Supported 00:16:05.647 Delete I/O Completion Queue (04h): Supported 00:16:05.647 Create I/O Completion Queue (05h): Supported 00:16:05.647 Identify (06h): Supported 00:16:05.647 Abort (08h): Supported 00:16:05.647 Set Features (09h): Supported 00:16:05.647 Get Features (0Ah): Supported 00:16:05.647 Asynchronous Event Request (0Ch): Supported 00:16:05.647 Namespace Attachment (15h): Supported NS-Inventory-Change 00:16:05.647 Directive Send (19h): Supported 00:16:05.647 Directive Receive (1Ah): Supported 00:16:05.647 Virtualization Management (1Ch): Supported 00:16:05.648 Doorbell Buffer Config (7Ch): Supported 00:16:05.648 Format NVM (80h): Supported LBA-Change 00:16:05.648 I/O Commands 00:16:05.648 ------------ 00:16:05.648 Flush (00h): Supported LBA-Change 00:16:05.648 Write (01h): Supported LBA-Change 00:16:05.648 Read (02h): Supported 00:16:05.648 Compare (05h): Supported 00:16:05.648 Write Zeroes (08h): Supported LBA-Change 00:16:05.648 Dataset Management (09h): Supported LBA-Change 00:16:05.648 Unknown (0Ch): Supported 00:16:05.648 Unknown (12h): Supported 00:16:05.648 Copy (19h): Supported LBA-Change 00:16:05.648 Unknown (1Dh): Supported LBA-Change 00:16:05.648 00:16:05.648 Error Log 00:16:05.648 ========= 00:16:05.648 00:16:05.648 Arbitration 00:16:05.648 =========== 00:16:05.648 Arbitration Burst: no limit 00:16:05.648 00:16:05.648 Power Management 00:16:05.648 ================ 00:16:05.648 Number of Power States: 1 00:16:05.648 Current Power State: Power State #0 00:16:05.648 Power State #0: 00:16:05.648 Max Power: 25.00 W 00:16:05.648 Non-Operational State: Operational 00:16:05.648 Entry Latency: 16 microseconds 00:16:05.648 Exit Latency: 4 microseconds 00:16:05.648 Relative Read Throughput: 0 00:16:05.648 Relative Read Latency: 0 00:16:05.648 Relative Write Throughput: 0 00:16:05.648 Relative Write Latency: 0 00:16:05.648 Idle Power: Not Reported 00:16:05.648 Active Power: Not Reported 00:16:05.648 Non-Operational Permissive Mode: Not Supported 
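The SMART counters in these dumps (Data Units Read/Written) follow the NVMe spec convention of thousands of 512-byte units, so converting a counter to bytes is a single multiplication. As a sketch, taking the 1189/1055 pair reported for controller 12341 earlier in this log:

    # convert SMART data-unit counters (1000 x 512 bytes each) to raw bytes
    du_read=1189 du_written=1055
    echo "read:    $((du_read * 1000 * 512)) bytes"    # 608768000 (~0.6 GB)
    echo "written: $((du_written * 1000 * 512)) bytes" # 540160000 (~0.5 GB)

The small values are expected here, since the QEMU controllers have only seen the traffic generated during this test run.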
00:16:05.648 00:16:05.648 Health Information 00:16:05.648 ================== 00:16:05.648 Critical Warnings: 00:16:05.648 Available Spare Space: OK 00:16:05.648 Temperature: OK 00:16:05.648 Device Reliability: OK 00:16:05.648 Read Only: No 00:16:05.648 Volatile Memory Backup: OK 00:16:05.648 Current Temperature: 323 Kelvin (50 Celsius) 00:16:05.648 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:05.648 Available Spare: 0% 00:16:05.648 Available Spare Threshold: 0% 00:16:05.648 Life Percentage Used: 0% 00:16:05.648 Data Units Read: 2456 00:16:05.648 Data Units Written: 2243 00:16:05.648 Host Read Commands: 112488 00:16:05.648 Host Write Commands: 110757 00:16:05.648 Controller Busy Time: 0 minutes 00:16:05.648 Power Cycles: 0 00:16:05.648 Power On Hours: 0 hours 00:16:05.648 Unsafe Shutdowns: 0 00:16:05.648 Unrecoverable Media Errors: 0 00:16:05.648 Lifetime Error Log Entries: 0 00:16:05.648 Warning Temperature Time: 0 minutes 00:16:05.648 Critical Temperature Time: 0 minutes 00:16:05.648 00:16:05.648 Number of Queues 00:16:05.648 ================ 00:16:05.648 Number of I/O Submission Queues: 64 00:16:05.648 Number of I/O Completion Queues: 64 00:16:05.648 00:16:05.648 ZNS Specific Controller Data 00:16:05.648 ============================ 00:16:05.648 Zone Append Size Limit: 0 00:16:05.648 00:16:05.648 00:16:05.648 Active Namespaces 00:16:05.648 ================= 00:16:05.648 Namespace ID:1 00:16:05.648 Error Recovery Timeout: Unlimited 00:16:05.648 Command Set Identifier: NVM (00h) 00:16:05.648 Deallocate: Supported 00:16:05.648 Deallocated/Unwritten Error: Supported 00:16:05.648 Deallocated Read Value: All 0x00 00:16:05.648 Deallocate in Write Zeroes: Not Supported 00:16:05.648 Deallocated Guard Field: 0xFFFF 00:16:05.648 Flush: Supported 00:16:05.648 Reservation: Not Supported 00:16:05.648 Namespace Sharing Capabilities: Private 00:16:05.648 Size (in LBAs): 1048576 (4GiB) 00:16:05.648 Capacity (in LBAs): 1048576 (4GiB) 00:16:05.648 Utilization (in LBAs): 1048576 (4GiB) 00:16:05.648 Thin Provisioning: Not Supported 00:16:05.648 Per-NS Atomic Units: No 00:16:05.648 Maximum Single Source Range Length: 128 00:16:05.648 Maximum Copy Length: 128 00:16:05.648 Maximum Source Range Count: 128 00:16:05.648 NGUID/EUI64 Never Reused: No 00:16:05.648 Namespace Write Protected: No 00:16:05.648 Number of LBA Formats: 8 00:16:05.648 Current LBA Format: LBA Format #04 00:16:05.648 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:05.648 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:05.648 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:05.648 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:05.648 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:05.648 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:05.648 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:05.648 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:05.648 00:16:05.648 NVM Specific Namespace Data 00:16:05.648 =========================== 00:16:05.648 Logical Block Storage Tag Mask: 0 00:16:05.648 Protection Information Capabilities: 00:16:05.648 16b Guard Protection Information Storage Tag Support: No 00:16:05.648 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:05.648 Storage Tag Check Read Support: No 00:16:05.648 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Namespace ID:2 00:16:05.648 Error Recovery Timeout: Unlimited 00:16:05.648 Command Set Identifier: NVM (00h) 00:16:05.648 Deallocate: Supported 00:16:05.648 Deallocated/Unwritten Error: Supported 00:16:05.648 Deallocated Read Value: All 0x00 00:16:05.648 Deallocate in Write Zeroes: Not Supported 00:16:05.648 Deallocated Guard Field: 0xFFFF 00:16:05.648 Flush: Supported 00:16:05.648 Reservation: Not Supported 00:16:05.648 Namespace Sharing Capabilities: Private 00:16:05.648 Size (in LBAs): 1048576 (4GiB) 00:16:05.648 Capacity (in LBAs): 1048576 (4GiB) 00:16:05.648 Utilization (in LBAs): 1048576 (4GiB) 00:16:05.648 Thin Provisioning: Not Supported 00:16:05.648 Per-NS Atomic Units: No 00:16:05.648 Maximum Single Source Range Length: 128 00:16:05.648 Maximum Copy Length: 128 00:16:05.648 Maximum Source Range Count: 128 00:16:05.648 NGUID/EUI64 Never Reused: No 00:16:05.648 Namespace Write Protected: No 00:16:05.648 Number of LBA Formats: 8 00:16:05.648 Current LBA Format: LBA Format #04 00:16:05.648 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:05.648 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:05.648 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:05.648 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:05.648 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:05.648 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:05.648 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:05.648 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:05.648 00:16:05.648 NVM Specific Namespace Data 00:16:05.648 =========================== 00:16:05.648 Logical Block Storage Tag Mask: 0 00:16:05.648 Protection Information Capabilities: 00:16:05.648 16b Guard Protection Information Storage Tag Support: No 00:16:05.648 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:05.648 Storage Tag Check Read Support: No 00:16:05.648 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.648 Namespace ID:3 00:16:05.648 Error Recovery Timeout: Unlimited 00:16:05.648 Command Set Identifier: NVM (00h) 00:16:05.648 Deallocate: Supported 00:16:05.648 Deallocated/Unwritten Error: Supported 00:16:05.648 Deallocated Read 
Value: All 0x00 00:16:05.648 Deallocate in Write Zeroes: Not Supported 00:16:05.648 Deallocated Guard Field: 0xFFFF 00:16:05.648 Flush: Supported 00:16:05.648 Reservation: Not Supported 00:16:05.648 Namespace Sharing Capabilities: Private 00:16:05.648 Size (in LBAs): 1048576 (4GiB) 00:16:05.648 Capacity (in LBAs): 1048576 (4GiB) 00:16:05.648 Utilization (in LBAs): 1048576 (4GiB) 00:16:05.648 Thin Provisioning: Not Supported 00:16:05.648 Per-NS Atomic Units: No 00:16:05.648 Maximum Single Source Range Length: 128 00:16:05.648 Maximum Copy Length: 128 00:16:05.648 Maximum Source Range Count: 128 00:16:05.648 NGUID/EUI64 Never Reused: No 00:16:05.648 Namespace Write Protected: No 00:16:05.648 Number of LBA Formats: 8 00:16:05.648 Current LBA Format: LBA Format #04 00:16:05.648 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:05.648 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:05.649 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:05.649 LBA Format #03: Data Size: 512 Metadata Size: 64 00:16:05.649 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:05.649 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:05.649 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:05.649 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:05.649 00:16:05.649 NVM Specific Namespace Data 00:16:05.649 =========================== 00:16:05.649 Logical Block Storage Tag Mask: 0 00:16:05.649 Protection Information Capabilities: 00:16:05.649 16b Guard Protection Information Storage Tag Support: No 00:16:05.649 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:05.649 Storage Tag Check Read Support: No 00:16:05.649 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.649 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.649 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.649 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.649 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.649 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.649 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.649 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.649 22:57:32 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:16:05.649 22:57:32 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:16:05.909 ===================================================== 00:16:05.909 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:05.909 ===================================================== 00:16:05.909 Controller Capabilities/Features 00:16:05.909 ================================ 00:16:05.909 Vendor ID: 1b36 00:16:05.909 Subsystem Vendor ID: 1af4 00:16:05.909 Serial Number: 12343 00:16:05.909 Model Number: QEMU NVMe Ctrl 00:16:05.909 Firmware Version: 8.0.0 00:16:05.909 Recommended Arb Burst: 6 00:16:05.909 IEEE OUI Identifier: 00 54 52 00:16:05.909 Multi-path I/O 00:16:05.909 May have multiple subsystem ports: No 00:16:05.909 May have multiple controllers: Yes 00:16:05.909 Associated with SR-IOV VF: No 00:16:05.909 Max Data Transfer Size: 524288 00:16:05.909 Max Number of Namespaces: 
256 00:16:05.909 Max Number of I/O Queues: 64 00:16:05.909 NVMe Specification Version (VS): 1.4 00:16:05.909 NVMe Specification Version (Identify): 1.4 00:16:05.909 Maximum Queue Entries: 2048 00:16:05.909 Contiguous Queues Required: Yes 00:16:05.909 Arbitration Mechanisms Supported 00:16:05.909 Weighted Round Robin: Not Supported 00:16:05.909 Vendor Specific: Not Supported 00:16:05.909 Reset Timeout: 7500 ms 00:16:05.909 Doorbell Stride: 4 bytes 00:16:05.909 NVM Subsystem Reset: Not Supported 00:16:05.909 Command Sets Supported 00:16:05.909 NVM Command Set: Supported 00:16:05.909 Boot Partition: Not Supported 00:16:05.909 Memory Page Size Minimum: 4096 bytes 00:16:05.909 Memory Page Size Maximum: 65536 bytes 00:16:05.909 Persistent Memory Region: Not Supported 00:16:05.909 Optional Asynchronous Events Supported 00:16:05.909 Namespace Attribute Notices: Supported 00:16:05.909 Firmware Activation Notices: Not Supported 00:16:05.909 ANA Change Notices: Not Supported 00:16:05.909 PLE Aggregate Log Change Notices: Not Supported 00:16:05.909 LBA Status Info Alert Notices: Not Supported 00:16:05.909 EGE Aggregate Log Change Notices: Not Supported 00:16:05.909 Normal NVM Subsystem Shutdown event: Not Supported 00:16:05.909 Zone Descriptor Change Notices: Not Supported 00:16:05.909 Discovery Log Change Notices: Not Supported 00:16:05.909 Controller Attributes 00:16:05.909 128-bit Host Identifier: Not Supported 00:16:05.909 Non-Operational Permissive Mode: Not Supported 00:16:05.909 NVM Sets: Not Supported 00:16:05.909 Read Recovery Levels: Not Supported 00:16:05.909 Endurance Groups: Supported 00:16:05.909 Predictable Latency Mode: Not Supported 00:16:05.909 Traffic Based Keep Alive: Not Supported 00:16:05.909 Namespace Granularity: Not Supported 00:16:05.909 SQ Associations: Not Supported 00:16:05.909 UUID List: Not Supported 00:16:05.909 Multi-Domain Subsystem: Not Supported 00:16:05.909 Fixed Capacity Management: Not Supported 00:16:05.909 Variable Capacity Management: Not Supported 00:16:05.909 Delete Endurance Group: Not Supported 00:16:05.909 Delete NVM Set: Not Supported 00:16:05.909 Extended LBA Formats Supported: Supported 00:16:05.909 Flexible Data Placement Supported: Supported 00:16:05.909 00:16:05.909 Controller Memory Buffer Support 00:16:05.909 ================================ 00:16:05.909 Supported: No 00:16:05.909 00:16:05.909 Persistent Memory Region Support 00:16:05.909 ================================ 00:16:05.909 Supported: No 00:16:05.909 00:16:05.909 Admin Command Set Attributes 00:16:05.909 ============================ 00:16:05.909 Security Send/Receive: Not Supported 00:16:05.909 Format NVM: Supported 00:16:05.909 Firmware Activate/Download: Not Supported 00:16:05.909 Namespace Management: Supported 00:16:05.909 Device Self-Test: Not Supported 00:16:05.909 Directives: Supported 00:16:05.909 NVMe-MI: Not Supported 00:16:05.909 Virtualization Management: Not Supported 00:16:05.909 Doorbell Buffer Config: Supported 00:16:05.909 Get LBA Status Capability: Not Supported 00:16:05.909 Command & Feature Lockdown Capability: Not Supported 00:16:05.909 Abort Command Limit: 4 00:16:05.909 Async Event Request Limit: 4 00:16:05.909 Number of Firmware Slots: N/A 00:16:05.909 Firmware Slot 1 Read-Only: N/A 00:16:05.909 Firmware Activation Without Reset: N/A 00:16:05.909 Multiple Update Detection Support: N/A 00:16:05.909 Firmware Update Granularity: No Information Provided 00:16:05.909 Per-Namespace SMART Log: Yes 00:16:05.909 Asymmetric Namespace Access Log Page: Not Supported
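The controller at 0000:00:13.0 is the FDP-capable device in this run: the attribute block above reports Endurance Groups and Flexible Data Placement as Supported. As a minimal sketch (not part of the test scripts), the same capability lines can be pulled back out of the identify invocation this log uses; the binary path and PCI address mirror this run, while sudo and the grep pattern are illustrative assumptions:

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
# Re-run the identify shown above and keep only the FDP-related capability lines.
sudo "$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 \
    | grep -E 'Endurance Groups|Flexible Data Placement|Subsystem NQN'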
00:16:05.909 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:16:05.909 Command Effects Log Page: Supported 00:16:05.909 Get Log Page Extended Data: Supported 00:16:05.909 Telemetry Log Pages: Not Supported 00:16:05.909 Persistent Event Log Pages: Not Supported 00:16:05.909 Supported Log Pages Log Page: May Support 00:16:05.909 Commands Supported & Effects Log Page: Not Supported 00:16:05.909 Feature Identifiers & Effects Log Page: May Support 00:16:05.909 NVMe-MI Commands & Effects Log Page: May Support 00:16:05.909 Data Area 4 for Telemetry Log: Not Supported 00:16:05.909 Error Log Page Entries Supported: 1 00:16:05.909 Keep Alive: Not Supported 00:16:05.909 00:16:05.909 NVM Command Set Attributes 00:16:05.909 ========================== 00:16:05.909 Submission Queue Entry Size 00:16:05.909 Max: 64 00:16:05.909 Min: 64 00:16:05.909 Completion Queue Entry Size 00:16:05.909 Max: 16 00:16:05.909 Min: 16 00:16:05.909 Number of Namespaces: 256 00:16:05.909 Compare Command: Supported 00:16:05.909 Write Uncorrectable Command: Not Supported 00:16:05.909 Dataset Management Command: Supported 00:16:05.909 Write Zeroes Command: Supported 00:16:05.909 Set Features Save Field: Supported 00:16:05.909 Reservations: Not Supported 00:16:05.909 Timestamp: Supported 00:16:05.909 Copy: Supported 00:16:05.909 Volatile Write Cache: Present 00:16:05.909 Atomic Write Unit (Normal): 1 00:16:05.909 Atomic Write Unit (PFail): 1 00:16:05.909 Atomic Compare & Write Unit: 1 00:16:05.909 Fused Compare & Write: Not Supported 00:16:05.909 Scatter-Gather List 00:16:05.909 SGL Command Set: Supported 00:16:05.909 SGL Keyed: Not Supported 00:16:05.909 SGL Bit Bucket Descriptor: Not Supported 00:16:05.910 SGL Metadata Pointer: Not Supported 00:16:05.910 Oversized SGL: Not Supported 00:16:05.910 SGL Metadata Address: Not Supported 00:16:05.910 SGL Offset: Not Supported 00:16:05.910 Transport SGL Data Block: Not Supported 00:16:05.910 Replay Protected Memory Block: Not Supported 00:16:05.910 00:16:05.910 Firmware Slot Information 00:16:05.910 ========================= 00:16:05.910 Active slot: 1 00:16:05.910 Slot 1 Firmware Revision: 1.0 00:16:05.910 00:16:05.910 00:16:05.910 Commands Supported and Effects 00:16:05.910 ============================== 00:16:05.910 Admin Commands 00:16:05.910 -------------- 00:16:05.910 Delete I/O Submission Queue (00h): Supported 00:16:05.910 Create I/O Submission Queue (01h): Supported 00:16:05.910 Get Log Page (02h): Supported 00:16:05.910 Delete I/O Completion Queue (04h): Supported 00:16:05.910 Create I/O Completion Queue (05h): Supported 00:16:05.910 Identify (06h): Supported 00:16:05.910 Abort (08h): Supported 00:16:05.910 Set Features (09h): Supported 00:16:05.910 Get Features (0Ah): Supported 00:16:05.910 Asynchronous Event Request (0Ch): Supported 00:16:05.910 Namespace Attachment (15h): Supported NS-Inventory-Change 00:16:05.910 Directive Send (19h): Supported 00:16:05.910 Directive Receive (1Ah): Supported 00:16:05.910 Virtualization Management (1Ch): Supported 00:16:05.910 Doorbell Buffer Config (7Ch): Supported 00:16:05.910 Format NVM (80h): Supported LBA-Change 00:16:05.910 I/O Commands 00:16:05.910 ------------ 00:16:05.910 Flush (00h): Supported LBA-Change 00:16:05.910 Write (01h): Supported LBA-Change 00:16:05.910 Read (02h): Supported 00:16:05.910 Compare (05h): Supported 00:16:05.910 Write Zeroes (08h): Supported LBA-Change 00:16:05.910 Dataset Management (09h): Supported LBA-Change 00:16:05.910 Unknown (0Ch): Supported 00:16:05.910 Unknown (12h): Supported 00:16:05.910 Copy 
(19h): Supported LBA-Change 00:16:05.910 Unknown (1Dh): Supported LBA-Change 00:16:05.910 00:16:05.910 Error Log 00:16:05.910 ========= 00:16:05.910 00:16:05.910 Arbitration 00:16:05.910 =========== 00:16:05.910 Arbitration Burst: no limit 00:16:05.910 00:16:05.910 Power Management 00:16:05.910 ================ 00:16:05.910 Number of Power States: 1 00:16:05.910 Current Power State: Power State #0 00:16:05.910 Power State #0: 00:16:05.910 Max Power: 25.00 W 00:16:05.910 Non-Operational State: Operational 00:16:05.910 Entry Latency: 16 microseconds 00:16:05.910 Exit Latency: 4 microseconds 00:16:05.910 Relative Read Throughput: 0 00:16:05.910 Relative Read Latency: 0 00:16:05.910 Relative Write Throughput: 0 00:16:05.910 Relative Write Latency: 0 00:16:05.910 Idle Power: Not Reported 00:16:05.910 Active Power: Not Reported 00:16:05.910 Non-Operational Permissive Mode: Not Supported 00:16:05.910 00:16:05.910 Health Information 00:16:05.910 ================== 00:16:05.910 Critical Warnings: 00:16:05.910 Available Spare Space: OK 00:16:05.910 Temperature: OK 00:16:05.910 Device Reliability: OK 00:16:05.910 Read Only: No 00:16:05.910 Volatile Memory Backup: OK 00:16:05.910 Current Temperature: 323 Kelvin (50 Celsius) 00:16:05.910 Temperature Threshold: 343 Kelvin (70 Celsius) 00:16:05.910 Available Spare: 0% 00:16:05.910 Available Spare Threshold: 0% 00:16:05.910 Life Percentage Used: 0% 00:16:05.910 Data Units Read: 904 00:16:05.910 Data Units Written: 833 00:16:05.910 Host Read Commands: 38223 00:16:05.910 Host Write Commands: 37646 00:16:05.910 Controller Busy Time: 0 minutes 00:16:05.910 Power Cycles: 0 00:16:05.910 Power On Hours: 0 hours 00:16:05.910 Unsafe Shutdowns: 0 00:16:05.910 Unrecoverable Media Errors: 0 00:16:05.910 Lifetime Error Log Entries: 0 00:16:05.910 Warning Temperature Time: 0 minutes 00:16:05.910 Critical Temperature Time: 0 minutes 00:16:05.910 00:16:05.910 Number of Queues 00:16:05.910 ================ 00:16:05.910 Number of I/O Submission Queues: 64 00:16:05.910 Number of I/O Completion Queues: 64 00:16:05.910 00:16:05.910 ZNS Specific Controller Data 00:16:05.910 ============================ 00:16:05.910 Zone Append Size Limit: 0 00:16:05.910 00:16:05.910 00:16:05.910 Active Namespaces 00:16:05.910 ================= 00:16:05.910 Namespace ID:1 00:16:05.910 Error Recovery Timeout: Unlimited 00:16:05.910 Command Set Identifier: NVM (00h) 00:16:05.910 Deallocate: Supported 00:16:05.910 Deallocated/Unwritten Error: Supported 00:16:05.910 Deallocated Read Value: All 0x00 00:16:05.910 Deallocate in Write Zeroes: Not Supported 00:16:05.910 Deallocated Guard Field: 0xFFFF 00:16:05.910 Flush: Supported 00:16:05.910 Reservation: Not Supported 00:16:05.910 Namespace Sharing Capabilities: Multiple Controllers 00:16:05.910 Size (in LBAs): 262144 (1GiB) 00:16:05.910 Capacity (in LBAs): 262144 (1GiB) 00:16:05.910 Utilization (in LBAs): 262144 (1GiB) 00:16:05.910 Thin Provisioning: Not Supported 00:16:05.910 Per-NS Atomic Units: No 00:16:05.910 Maximum Single Source Range Length: 128 00:16:05.910 Maximum Copy Length: 128 00:16:05.910 Maximum Source Range Count: 128 00:16:05.910 NGUID/EUI64 Never Reused: No 00:16:05.910 Namespace Write Protected: No 00:16:05.910 Endurance group ID: 1 00:16:05.910 Number of LBA Formats: 8 00:16:05.910 Current LBA Format: LBA Format #04 00:16:05.910 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:05.910 LBA Format #01: Data Size: 512 Metadata Size: 8 00:16:05.910 LBA Format #02: Data Size: 512 Metadata Size: 16 00:16:05.910 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:16:05.910 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:16:05.910 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:16:05.910 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:16:05.910 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:16:05.910 00:16:05.910 Get Feature FDP: 00:16:05.910 ================ 00:16:05.910 Enabled: Yes 00:16:05.910 FDP configuration index: 0 00:16:05.910 00:16:05.910 FDP configurations log page 00:16:05.910 =========================== 00:16:05.910 Number of FDP configurations: 1 00:16:05.910 Version: 0 00:16:05.910 Size: 112 00:16:05.910 FDP Configuration Descriptor: 0 00:16:05.910 Descriptor Size: 96 00:16:05.910 Reclaim Group Identifier format: 2 00:16:05.910 FDP Volatile Write Cache: Not Present 00:16:05.910 FDP Configuration: Valid 00:16:05.910 Vendor Specific Size: 0 00:16:05.910 Number of Reclaim Groups: 2 00:16:05.910 Number of Reclaim Unit Handles: 8 00:16:05.910 Max Placement Identifiers: 128 00:16:05.910 Number of Namespaces Supported: 256 00:16:05.910 Reclaim Unit Nominal Size: 6000000 bytes 00:16:05.910 Estimated Reclaim Unit Time Limit: Not Reported 00:16:05.910 RUH Desc #000: RUH Type: Initially Isolated 00:16:05.910 RUH Desc #001: RUH Type: Initially Isolated 00:16:05.910 RUH Desc #002: RUH Type: Initially Isolated 00:16:05.910 RUH Desc #003: RUH Type: Initially Isolated 00:16:05.910 RUH Desc #004: RUH Type: Initially Isolated 00:16:05.910 RUH Desc #005: RUH Type: Initially Isolated 00:16:05.910 RUH Desc #006: RUH Type: Initially Isolated 00:16:05.910 RUH Desc #007: RUH Type: Initially Isolated 00:16:05.910 00:16:05.910 FDP reclaim unit handle usage log page 00:16:05.910 ====================================== 00:16:05.910 Number of Reclaim Unit Handles: 8 00:16:05.910 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:16:05.910 RUH Usage Desc #001: RUH Attributes: Unused 00:16:05.910 RUH Usage Desc #002: RUH Attributes: Unused 00:16:05.910 RUH Usage Desc #003: RUH Attributes: Unused 00:16:05.910 RUH Usage Desc #004: RUH Attributes: Unused 00:16:05.910 RUH Usage Desc #005: RUH Attributes: Unused 00:16:05.910 RUH Usage Desc #006: RUH Attributes: Unused 00:16:05.910 RUH Usage Desc #007: RUH Attributes: Unused 00:16:05.910 00:16:05.910 FDP statistics log page 00:16:05.910 ======================= 00:16:05.910 Host bytes with metadata written: 526884864 00:16:05.910 Media bytes with metadata written: 526942208 00:16:05.910 Media bytes erased: 0 00:16:05.910 00:16:05.910 FDP events log page 00:16:05.910 =================== 00:16:05.910 Number of FDP events: 0 00:16:05.910 00:16:05.910 NVM Specific Namespace Data 00:16:05.910 =========================== 00:16:05.910 Logical Block Storage Tag Mask: 0 00:16:05.910 Protection Information Capabilities: 00:16:05.910 16b Guard Protection Information Storage Tag Support: No 00:16:05.910 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:16:05.910 Storage Tag Check Read Support: No 00:16:05.910 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.910 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.910 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.910 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.910 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.910 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.910 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.910 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:16:05.910 00:16:05.910 real 0m1.795s 00:16:05.910 user 0m0.660s 00:16:05.910 sys 0m0.914s 00:16:05.911 22:57:33 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.911 22:57:33 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:16:05.911 ************************************ 00:16:05.911 END TEST nvme_identify 00:16:05.911 ************************************ 00:16:05.911 22:57:33 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:16:05.911 22:57:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:05.911 22:57:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.911 22:57:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:05.911 ************************************ 00:16:05.911 START TEST nvme_perf 00:16:05.911 ************************************ 00:16:05.911 22:57:33 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:16:05.911 22:57:33 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:16:07.291 Initializing NVMe Controllers 00:16:07.291 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:07.291 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:16:07.291 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:16:07.291 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:16:07.291 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:07.291 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:16:07.291 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:16:07.291 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:16:07.291 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:16:07.291 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:16:07.291 Initialization complete. Launching workers. 
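In the device table that follows, the IOPS and MiB/s columns describe the same measurement: with the 12288-byte I/O size passed to spdk_nvme_perf via -o, MiB/s = IOPS * 12288 / 2^20. A quick arithmetic check against the first row, using only values from this log:

# Sanity-check the PCIE (0000:00:10.0) row: 12955.10 IOPS at 12288 bytes per I/O.
awk 'BEGIN { printf "%.2f MiB/s\n", 12955.10 * 12288 / 1048576 }'   # prints 151.82, matching the table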
00:16:07.291 ======================================================== 00:16:07.291 Latency(us) 00:16:07.291 Device Information : IOPS MiB/s Average min max 00:16:07.291 PCIE (0000:00:10.0) NSID 1 from core 0: 12955.10 151.82 9916.77 7680.97 52993.74 00:16:07.291 PCIE (0000:00:11.0) NSID 1 from core 0: 12955.10 151.82 9901.85 7741.59 50412.63 00:16:07.291 PCIE (0000:00:13.0) NSID 1 from core 0: 12955.10 151.82 9886.00 7757.22 48429.45 00:16:07.291 PCIE (0000:00:12.0) NSID 1 from core 0: 12955.10 151.82 9870.19 7751.94 46001.78 00:16:07.291 PCIE (0000:00:12.0) NSID 2 from core 0: 12955.10 151.82 9852.50 7773.28 43654.19 00:16:07.291 PCIE (0000:00:12.0) NSID 3 from core 0: 13018.92 152.57 9785.96 7777.74 36388.74 00:16:07.291 ======================================================== 00:16:07.291 Total : 77794.44 911.65 9868.81 7680.97 52993.74 00:16:07.291 00:16:07.291 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:16:07.291 ================================================================================= 00:16:07.291 1.00000% : 7948.543us 00:16:07.291 10.00000% : 8211.740us 00:16:07.291 25.00000% : 8527.576us 00:16:07.291 50.00000% : 8896.051us 00:16:07.291 75.00000% : 9633.002us 00:16:07.291 90.00000% : 12317.610us 00:16:07.291 95.00000% : 14739.020us 00:16:07.291 98.00000% : 17581.545us 00:16:07.291 99.00000% : 20845.186us 00:16:07.291 99.50000% : 45901.520us 00:16:07.291 99.90000% : 52639.357us 00:16:07.291 99.99000% : 53060.472us 00:16:07.291 99.99900% : 53060.472us 00:16:07.291 99.99990% : 53060.472us 00:16:07.291 99.99999% : 53060.472us 00:16:07.291 00:16:07.291 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:16:07.291 ================================================================================= 00:16:07.291 1.00000% : 8001.182us 00:16:07.291 10.00000% : 8264.379us 00:16:07.291 25.00000% : 8527.576us 00:16:07.291 50.00000% : 8896.051us 00:16:07.291 75.00000% : 9580.363us 00:16:07.291 90.00000% : 12370.249us 00:16:07.291 95.00000% : 14739.020us 00:16:07.291 98.00000% : 17792.103us 00:16:07.291 99.00000% : 20424.071us 00:16:07.291 99.50000% : 43585.388us 00:16:07.291 99.90000% : 50112.668us 00:16:07.291 99.99000% : 50533.783us 00:16:07.291 99.99900% : 50533.783us 00:16:07.291 99.99990% : 50533.783us 00:16:07.291 99.99999% : 50533.783us 00:16:07.291 00:16:07.291 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:16:07.291 ================================================================================= 00:16:07.291 1.00000% : 8001.182us 00:16:07.291 10.00000% : 8264.379us 00:16:07.291 25.00000% : 8527.576us 00:16:07.291 50.00000% : 8896.051us 00:16:07.291 75.00000% : 9633.002us 00:16:07.291 90.00000% : 12422.888us 00:16:07.291 95.00000% : 14844.299us 00:16:07.291 98.00000% : 17792.103us 00:16:07.291 99.00000% : 19581.841us 00:16:07.291 99.50000% : 41690.371us 00:16:07.291 99.90000% : 48217.651us 00:16:07.291 99.99000% : 48428.209us 00:16:07.291 99.99900% : 48638.766us 00:16:07.291 99.99990% : 48638.766us 00:16:07.291 99.99999% : 48638.766us 00:16:07.291 00:16:07.291 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:16:07.291 ================================================================================= 00:16:07.291 1.00000% : 8001.182us 00:16:07.291 10.00000% : 8264.379us 00:16:07.291 25.00000% : 8527.576us 00:16:07.291 50.00000% : 8896.051us 00:16:07.291 75.00000% : 9633.002us 00:16:07.291 90.00000% : 12370.249us 00:16:07.291 95.00000% : 14844.299us 00:16:07.291 98.00000% : 18002.660us 00:16:07.291 
99.00000% : 19581.841us 00:16:07.291 99.50000% : 39374.239us 00:16:07.291 99.90000% : 45690.962us 00:16:07.291 99.99000% : 46112.077us 00:16:07.291 99.99900% : 46112.077us 00:16:07.291 99.99990% : 46112.077us 00:16:07.291 99.99999% : 46112.077us 00:16:07.291 00:16:07.291 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:16:07.291 ================================================================================= 00:16:07.291 1.00000% : 8001.182us 00:16:07.291 10.00000% : 8264.379us 00:16:07.291 25.00000% : 8527.576us 00:16:07.291 50.00000% : 8896.051us 00:16:07.291 75.00000% : 9633.002us 00:16:07.291 90.00000% : 12317.610us 00:16:07.291 95.00000% : 14844.299us 00:16:07.291 98.00000% : 17792.103us 00:16:07.291 99.00000% : 20318.792us 00:16:07.291 99.50000% : 37268.665us 00:16:07.291 99.90000% : 43374.831us 00:16:07.291 99.99000% : 43795.945us 00:16:07.291 99.99900% : 43795.945us 00:16:07.291 99.99990% : 43795.945us 00:16:07.291 99.99999% : 43795.945us 00:16:07.291 00:16:07.291 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:16:07.291 ================================================================================= 00:16:07.291 1.00000% : 8001.182us 00:16:07.291 10.00000% : 8264.379us 00:16:07.291 25.00000% : 8527.576us 00:16:07.291 50.00000% : 8896.051us 00:16:07.291 75.00000% : 9633.002us 00:16:07.291 90.00000% : 12317.610us 00:16:07.291 95.00000% : 14844.299us 00:16:07.291 98.00000% : 17581.545us 00:16:07.291 99.00000% : 20424.071us 00:16:07.291 99.50000% : 29478.040us 00:16:07.291 99.90000% : 36005.320us 00:16:07.291 99.99000% : 36426.435us 00:16:07.291 99.99900% : 36426.435us 00:16:07.291 99.99990% : 36426.435us 00:16:07.291 99.99999% : 36426.435us 00:16:07.291 00:16:07.291 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:16:07.291 ============================================================================== 00:16:07.291 Range in us Cumulative IO count 00:16:07.291 7632.707 - 7685.346: 0.0154% ( 2) 00:16:07.291 7685.346 - 7737.986: 0.0693% ( 7) 00:16:07.291 7737.986 - 7790.625: 0.2463% ( 23) 00:16:07.291 7790.625 - 7843.264: 0.5619% ( 41) 00:16:07.291 7843.264 - 7895.904: 0.9852% ( 55) 00:16:07.291 7895.904 - 7948.543: 1.6780% ( 90) 00:16:07.291 7948.543 - 8001.182: 2.7248% ( 136) 00:16:07.291 8001.182 - 8053.822: 4.1718% ( 188) 00:16:07.291 8053.822 - 8106.461: 6.2885% ( 275) 00:16:07.291 8106.461 - 8159.100: 8.6053% ( 301) 00:16:07.291 8159.100 - 8211.740: 11.1761% ( 334) 00:16:07.291 8211.740 - 8264.379: 13.8470% ( 347) 00:16:07.291 8264.379 - 8317.018: 16.5025% ( 345) 00:16:07.291 8317.018 - 8369.658: 19.2811% ( 361) 00:16:07.291 8369.658 - 8422.297: 22.1521% ( 373) 00:16:07.291 8422.297 - 8474.937: 24.9615% ( 365) 00:16:07.291 8474.937 - 8527.576: 27.9865% ( 393) 00:16:07.291 8527.576 - 8580.215: 31.1268% ( 408) 00:16:07.291 8580.215 - 8632.855: 34.1903% ( 398) 00:16:07.292 8632.855 - 8685.494: 37.3999% ( 417) 00:16:07.292 8685.494 - 8738.133: 40.6635% ( 424) 00:16:07.292 8738.133 - 8790.773: 43.8885% ( 419) 00:16:07.292 8790.773 - 8843.412: 47.1675% ( 426) 00:16:07.292 8843.412 - 8896.051: 50.3618% ( 415) 00:16:07.292 8896.051 - 8948.691: 53.6176% ( 423) 00:16:07.292 8948.691 - 9001.330: 56.7580% ( 408) 00:16:07.292 9001.330 - 9053.969: 59.8214% ( 398) 00:16:07.292 9053.969 - 9106.609: 62.4076% ( 336) 00:16:07.292 9106.609 - 9159.248: 64.7552% ( 305) 00:16:07.292 9159.248 - 9211.888: 66.3947% ( 213) 00:16:07.292 9211.888 - 9264.527: 67.9341% ( 200) 00:16:07.292 9264.527 - 9317.166: 69.2118% ( 166) 00:16:07.292 9317.166 - 
9369.806: 70.3895% ( 153) 00:16:07.292 9369.806 - 9422.445: 71.5132% ( 146) 00:16:07.292 9422.445 - 9475.084: 72.5754% ( 138) 00:16:07.292 9475.084 - 9527.724: 73.6376% ( 138) 00:16:07.292 9527.724 - 9580.363: 74.6536% ( 132) 00:16:07.292 9580.363 - 9633.002: 75.6619% ( 131) 00:16:07.292 9633.002 - 9685.642: 76.6164% ( 124) 00:16:07.292 9685.642 - 9738.281: 77.4861% ( 113) 00:16:07.292 9738.281 - 9790.920: 78.2558% ( 100) 00:16:07.292 9790.920 - 9843.560: 78.9871% ( 95) 00:16:07.292 9843.560 - 9896.199: 79.6567% ( 87) 00:16:07.292 9896.199 - 9948.839: 80.2186% ( 73) 00:16:07.292 9948.839 - 10001.478: 80.7805% ( 73) 00:16:07.292 10001.478 - 10054.117: 81.1653% ( 50) 00:16:07.292 10054.117 - 10106.757: 81.5579% ( 51) 00:16:07.292 10106.757 - 10159.396: 81.8427% ( 37) 00:16:07.292 10159.396 - 10212.035: 82.1121% ( 35) 00:16:07.292 10212.035 - 10264.675: 82.3738% ( 34) 00:16:07.292 10264.675 - 10317.314: 82.5893% ( 28) 00:16:07.292 10317.314 - 10369.953: 82.8125% ( 29) 00:16:07.292 10369.953 - 10422.593: 83.0511% ( 31) 00:16:07.292 10422.593 - 10475.232: 83.2512% ( 26) 00:16:07.292 10475.232 - 10527.871: 83.4898% ( 31) 00:16:07.292 10527.871 - 10580.511: 83.6592% ( 22) 00:16:07.292 10580.511 - 10633.150: 83.8747% ( 28) 00:16:07.292 10633.150 - 10685.790: 84.0209% ( 19) 00:16:07.292 10685.790 - 10738.429: 84.2211% ( 26) 00:16:07.292 10738.429 - 10791.068: 84.3981% ( 23) 00:16:07.292 10791.068 - 10843.708: 84.5674% ( 22) 00:16:07.292 10843.708 - 10896.347: 84.6983% ( 17) 00:16:07.292 10896.347 - 10948.986: 84.8676% ( 22) 00:16:07.292 10948.986 - 11001.626: 85.0216% ( 20) 00:16:07.292 11001.626 - 11054.265: 85.1370% ( 15) 00:16:07.292 11054.265 - 11106.904: 85.2679% ( 17) 00:16:07.292 11106.904 - 11159.544: 85.4526% ( 24) 00:16:07.292 11159.544 - 11212.183: 85.6219% ( 22) 00:16:07.292 11212.183 - 11264.822: 85.7990% ( 23) 00:16:07.292 11264.822 - 11317.462: 85.9760% ( 23) 00:16:07.292 11317.462 - 11370.101: 86.1838% ( 27) 00:16:07.292 11370.101 - 11422.741: 86.3762% ( 25) 00:16:07.292 11422.741 - 11475.380: 86.5917% ( 28) 00:16:07.292 11475.380 - 11528.019: 86.8458% ( 33) 00:16:07.292 11528.019 - 11580.659: 87.0921% ( 32) 00:16:07.292 11580.659 - 11633.298: 87.3615% ( 35) 00:16:07.292 11633.298 - 11685.937: 87.5462% ( 24) 00:16:07.292 11685.937 - 11738.577: 87.8002% ( 33) 00:16:07.292 11738.577 - 11791.216: 88.0619% ( 34) 00:16:07.292 11791.216 - 11843.855: 88.2697% ( 27) 00:16:07.292 11843.855 - 11896.495: 88.4698% ( 26) 00:16:07.292 11896.495 - 11949.134: 88.7623% ( 38) 00:16:07.292 11949.134 - 12001.773: 88.9624% ( 26) 00:16:07.292 12001.773 - 12054.413: 89.1857% ( 29) 00:16:07.292 12054.413 - 12107.052: 89.4166% ( 30) 00:16:07.292 12107.052 - 12159.692: 89.6090% ( 25) 00:16:07.292 12159.692 - 12212.331: 89.7937% ( 24) 00:16:07.292 12212.331 - 12264.970: 89.9477% ( 20) 00:16:07.292 12264.970 - 12317.610: 90.1401% ( 25) 00:16:07.292 12317.610 - 12370.249: 90.2786% ( 18) 00:16:07.292 12370.249 - 12422.888: 90.4095% ( 17) 00:16:07.292 12422.888 - 12475.528: 90.5788% ( 22) 00:16:07.292 12475.528 - 12528.167: 90.7328% ( 20) 00:16:07.292 12528.167 - 12580.806: 90.8559% ( 16) 00:16:07.292 12580.806 - 12633.446: 91.0329% ( 23) 00:16:07.292 12633.446 - 12686.085: 91.2100% ( 23) 00:16:07.292 12686.085 - 12738.724: 91.3331% ( 16) 00:16:07.292 12738.724 - 12791.364: 91.5486% ( 28) 00:16:07.292 12791.364 - 12844.003: 91.7257% ( 23) 00:16:07.292 12844.003 - 12896.643: 91.8873% ( 21) 00:16:07.292 12896.643 - 12949.282: 91.9566% ( 9) 00:16:07.292 12949.282 - 13001.921: 92.0643% ( 14) 00:16:07.292 13001.921 - 
13054.561: 92.1875% ( 16) 00:16:07.292 13054.561 - 13107.200: 92.2722% ( 11) 00:16:07.292 13107.200 - 13159.839: 92.3799% ( 14) 00:16:07.292 13159.839 - 13212.479: 92.4261% ( 6) 00:16:07.292 13212.479 - 13265.118: 92.5262% ( 13) 00:16:07.292 13265.118 - 13317.757: 92.6185% ( 12) 00:16:07.292 13317.757 - 13370.397: 92.7032% ( 11) 00:16:07.292 13370.397 - 13423.036: 92.8033% ( 13) 00:16:07.292 13423.036 - 13475.676: 92.9187% ( 15) 00:16:07.292 13475.676 - 13580.954: 93.1265% ( 27) 00:16:07.292 13580.954 - 13686.233: 93.2882% ( 21) 00:16:07.292 13686.233 - 13791.512: 93.4729% ( 24) 00:16:07.292 13791.512 - 13896.790: 93.6345% ( 21) 00:16:07.292 13896.790 - 14002.069: 93.7654% ( 17) 00:16:07.292 14002.069 - 14107.348: 93.9193% ( 20) 00:16:07.292 14107.348 - 14212.627: 94.1118% ( 25) 00:16:07.292 14212.627 - 14317.905: 94.3504% ( 31) 00:16:07.292 14317.905 - 14423.184: 94.5197% ( 22) 00:16:07.292 14423.184 - 14528.463: 94.7429% ( 29) 00:16:07.292 14528.463 - 14633.741: 94.8892% ( 19) 00:16:07.292 14633.741 - 14739.020: 95.1201% ( 30) 00:16:07.292 14739.020 - 14844.299: 95.3125% ( 25) 00:16:07.292 14844.299 - 14949.578: 95.4587% ( 19) 00:16:07.292 14949.578 - 15054.856: 95.6050% ( 19) 00:16:07.292 15054.856 - 15160.135: 95.7358% ( 17) 00:16:07.292 15160.135 - 15265.414: 95.8513% ( 15) 00:16:07.292 15265.414 - 15370.692: 95.9821% ( 17) 00:16:07.292 15370.692 - 15475.971: 96.1207% ( 18) 00:16:07.292 15475.971 - 15581.250: 96.1900% ( 9) 00:16:07.292 15581.250 - 15686.529: 96.2823% ( 12) 00:16:07.292 15686.529 - 15791.807: 96.3978% ( 15) 00:16:07.292 15791.807 - 15897.086: 96.5594% ( 21) 00:16:07.292 15897.086 - 16002.365: 96.6749% ( 15) 00:16:07.292 16002.365 - 16107.643: 96.7442% ( 9) 00:16:07.292 16107.643 - 16212.922: 96.8057% ( 8) 00:16:07.292 16212.922 - 16318.201: 96.8673% ( 8) 00:16:07.292 16318.201 - 16423.480: 96.9597% ( 12) 00:16:07.292 16423.480 - 16528.758: 97.0366% ( 10) 00:16:07.292 16528.758 - 16634.037: 97.1213% ( 11) 00:16:07.292 16634.037 - 16739.316: 97.2214% ( 13) 00:16:07.292 16739.316 - 16844.594: 97.3060% ( 11) 00:16:07.292 16844.594 - 16949.873: 97.4138% ( 14) 00:16:07.292 16949.873 - 17055.152: 97.5369% ( 16) 00:16:07.292 17055.152 - 17160.431: 97.6601% ( 16) 00:16:07.292 17160.431 - 17265.709: 97.7833% ( 16) 00:16:07.292 17265.709 - 17370.988: 97.8910% ( 14) 00:16:07.292 17370.988 - 17476.267: 97.9680% ( 10) 00:16:07.292 17476.267 - 17581.545: 98.0603% ( 12) 00:16:07.292 17581.545 - 17686.824: 98.1681% ( 14) 00:16:07.292 17686.824 - 17792.103: 98.2682% ( 13) 00:16:07.292 17792.103 - 17897.382: 98.3451% ( 10) 00:16:07.292 17897.382 - 18002.660: 98.4452% ( 13) 00:16:07.292 18002.660 - 18107.939: 98.4991% ( 7) 00:16:07.292 18107.939 - 18213.218: 98.5222% ( 3) 00:16:07.292 19055.447 - 19160.726: 98.5376% ( 2) 00:16:07.292 19160.726 - 19266.005: 98.5760% ( 5) 00:16:07.292 19266.005 - 19371.284: 98.5991% ( 3) 00:16:07.292 19371.284 - 19476.562: 98.6299% ( 4) 00:16:07.292 19476.562 - 19581.841: 98.6607% ( 4) 00:16:07.292 19581.841 - 19687.120: 98.6915% ( 4) 00:16:07.292 19687.120 - 19792.398: 98.7300% ( 5) 00:16:07.292 19792.398 - 19897.677: 98.7531% ( 3) 00:16:07.292 19897.677 - 20002.956: 98.7762% ( 3) 00:16:07.292 20002.956 - 20108.235: 98.8070% ( 4) 00:16:07.292 20108.235 - 20213.513: 98.8377% ( 4) 00:16:07.292 20213.513 - 20318.792: 98.8608% ( 3) 00:16:07.292 20318.792 - 20424.071: 98.8993% ( 5) 00:16:07.292 20424.071 - 20529.349: 98.9224% ( 3) 00:16:07.292 20529.349 - 20634.628: 98.9532% ( 4) 00:16:07.292 20634.628 - 20739.907: 98.9840% ( 4) 00:16:07.292 20739.907 - 
20845.186: 99.0148% ( 4) 00:16:07.292 43585.388 - 43795.945: 99.0533% ( 5) 00:16:07.292 43795.945 - 44006.503: 99.0994% ( 6) 00:16:07.292 44006.503 - 44217.060: 99.1456% ( 6) 00:16:07.292 44217.060 - 44427.618: 99.1918% ( 6) 00:16:07.292 44427.618 - 44638.175: 99.2457% ( 7) 00:16:07.292 44638.175 - 44848.733: 99.2919% ( 6) 00:16:07.292 44848.733 - 45059.290: 99.3458% ( 7) 00:16:07.292 45059.290 - 45269.847: 99.3919% ( 6) 00:16:07.292 45269.847 - 45480.405: 99.4304% ( 5) 00:16:07.292 45480.405 - 45690.962: 99.4843% ( 7) 00:16:07.292 45690.962 - 45901.520: 99.5074% ( 3) 00:16:07.292 50744.341 - 50954.898: 99.5459% ( 5) 00:16:07.292 50954.898 - 51165.455: 99.5921% ( 6) 00:16:07.292 51165.455 - 51376.013: 99.6305% ( 5) 00:16:07.292 51376.013 - 51586.570: 99.6767% ( 6) 00:16:07.292 51586.570 - 51797.128: 99.7229% ( 6) 00:16:07.292 51797.128 - 52007.685: 99.7768% ( 7) 00:16:07.292 52007.685 - 52218.243: 99.8307% ( 7) 00:16:07.292 52218.243 - 52428.800: 99.8692% ( 5) 00:16:07.292 52428.800 - 52639.357: 99.9230% ( 7) 00:16:07.292 52639.357 - 52849.915: 99.9769% ( 7) 00:16:07.292 52849.915 - 53060.472: 100.0000% ( 3) 00:16:07.292 00:16:07.292 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:16:07.292 ============================================================================== 00:16:07.292 Range in us Cumulative IO count 00:16:07.292 7737.986 - 7790.625: 0.0385% ( 5) 00:16:07.292 7790.625 - 7843.264: 0.1308% ( 12) 00:16:07.292 7843.264 - 7895.904: 0.4464% ( 41) 00:16:07.292 7895.904 - 7948.543: 0.7851% ( 44) 00:16:07.292 7948.543 - 8001.182: 1.3316% ( 71) 00:16:07.292 8001.182 - 8053.822: 2.2860% ( 124) 00:16:07.292 8053.822 - 8106.461: 3.7485% ( 190) 00:16:07.292 8106.461 - 8159.100: 5.7189% ( 256) 00:16:07.292 8159.100 - 8211.740: 8.4667% ( 357) 00:16:07.292 8211.740 - 8264.379: 11.2608% ( 363) 00:16:07.292 8264.379 - 8317.018: 14.3627% ( 403) 00:16:07.292 8317.018 - 8369.658: 17.5108% ( 409) 00:16:07.292 8369.658 - 8422.297: 20.7435% ( 420) 00:16:07.292 8422.297 - 8474.937: 24.0841% ( 434) 00:16:07.293 8474.937 - 8527.576: 27.6478% ( 463) 00:16:07.293 8527.576 - 8580.215: 31.1576% ( 456) 00:16:07.293 8580.215 - 8632.855: 34.6829% ( 458) 00:16:07.293 8632.855 - 8685.494: 38.2235% ( 460) 00:16:07.293 8685.494 - 8738.133: 41.8026% ( 465) 00:16:07.293 8738.133 - 8790.773: 45.4972% ( 480) 00:16:07.293 8790.773 - 8843.412: 48.9763% ( 452) 00:16:07.293 8843.412 - 8896.051: 52.5708% ( 467) 00:16:07.293 8896.051 - 8948.691: 55.8651% ( 428) 00:16:07.293 8948.691 - 9001.330: 58.9286% ( 398) 00:16:07.293 9001.330 - 9053.969: 61.4994% ( 334) 00:16:07.293 9053.969 - 9106.609: 63.7084% ( 287) 00:16:07.293 9106.609 - 9159.248: 65.4326% ( 224) 00:16:07.293 9159.248 - 9211.888: 66.8950% ( 190) 00:16:07.293 9211.888 - 9264.527: 68.1881% ( 168) 00:16:07.293 9264.527 - 9317.166: 69.4273% ( 161) 00:16:07.293 9317.166 - 9369.806: 70.6743% ( 162) 00:16:07.293 9369.806 - 9422.445: 71.8750% ( 156) 00:16:07.293 9422.445 - 9475.084: 72.9449% ( 139) 00:16:07.293 9475.084 - 9527.724: 73.9917% ( 136) 00:16:07.293 9527.724 - 9580.363: 75.0693% ( 140) 00:16:07.293 9580.363 - 9633.002: 76.0930% ( 133) 00:16:07.293 9633.002 - 9685.642: 77.1167% ( 133) 00:16:07.293 9685.642 - 9738.281: 78.0480% ( 121) 00:16:07.293 9738.281 - 9790.920: 78.8331% ( 102) 00:16:07.293 9790.920 - 9843.560: 79.5643% ( 95) 00:16:07.293 9843.560 - 9896.199: 80.1801% ( 80) 00:16:07.293 9896.199 - 9948.839: 80.7266% ( 71) 00:16:07.293 9948.839 - 10001.478: 81.1807% ( 59) 00:16:07.293 10001.478 - 10054.117: 81.5348% ( 46) 00:16:07.293 
10054.117 - 10106.757: 81.8658% ( 43) 00:16:07.293 10106.757 - 10159.396: 82.1813% ( 41) 00:16:07.293 10159.396 - 10212.035: 82.4738% ( 38) 00:16:07.293 10212.035 - 10264.675: 82.7509% ( 36) 00:16:07.293 10264.675 - 10317.314: 83.0126% ( 34) 00:16:07.293 10317.314 - 10369.953: 83.2435% ( 30) 00:16:07.293 10369.953 - 10422.593: 83.5052% ( 34) 00:16:07.293 10422.593 - 10475.232: 83.6977% ( 25) 00:16:07.293 10475.232 - 10527.871: 83.8824% ( 24) 00:16:07.293 10527.871 - 10580.511: 84.1133% ( 30) 00:16:07.293 10580.511 - 10633.150: 84.2672% ( 20) 00:16:07.293 10633.150 - 10685.790: 84.3981% ( 17) 00:16:07.293 10685.790 - 10738.429: 84.5366% ( 18) 00:16:07.293 10738.429 - 10791.068: 84.6367% ( 13) 00:16:07.293 10791.068 - 10843.708: 84.7291% ( 12) 00:16:07.293 10843.708 - 10896.347: 84.8137% ( 11) 00:16:07.293 10896.347 - 10948.986: 84.9369% ( 16) 00:16:07.293 10948.986 - 11001.626: 85.0523% ( 15) 00:16:07.293 11001.626 - 11054.265: 85.2063% ( 20) 00:16:07.293 11054.265 - 11106.904: 85.3525% ( 19) 00:16:07.293 11106.904 - 11159.544: 85.4834% ( 17) 00:16:07.293 11159.544 - 11212.183: 85.6219% ( 18) 00:16:07.293 11212.183 - 11264.822: 85.7759% ( 20) 00:16:07.293 11264.822 - 11317.462: 85.9606% ( 24) 00:16:07.293 11317.462 - 11370.101: 86.1607% ( 26) 00:16:07.293 11370.101 - 11422.741: 86.3993% ( 31) 00:16:07.293 11422.741 - 11475.380: 86.5994% ( 26) 00:16:07.293 11475.380 - 11528.019: 86.8073% ( 27) 00:16:07.293 11528.019 - 11580.659: 86.9920% ( 24) 00:16:07.293 11580.659 - 11633.298: 87.2152% ( 29) 00:16:07.293 11633.298 - 11685.937: 87.4307% ( 28) 00:16:07.293 11685.937 - 11738.577: 87.7232% ( 38) 00:16:07.293 11738.577 - 11791.216: 87.9310% ( 27) 00:16:07.293 11791.216 - 11843.855: 88.1389% ( 27) 00:16:07.293 11843.855 - 11896.495: 88.3236% ( 24) 00:16:07.293 11896.495 - 11949.134: 88.5160% ( 25) 00:16:07.293 11949.134 - 12001.773: 88.6853% ( 22) 00:16:07.293 12001.773 - 12054.413: 88.8316% ( 19) 00:16:07.293 12054.413 - 12107.052: 88.9932% ( 21) 00:16:07.293 12107.052 - 12159.692: 89.1703% ( 23) 00:16:07.293 12159.692 - 12212.331: 89.3627% ( 25) 00:16:07.293 12212.331 - 12264.970: 89.5551% ( 25) 00:16:07.293 12264.970 - 12317.610: 89.8091% ( 33) 00:16:07.293 12317.610 - 12370.249: 90.0092% ( 26) 00:16:07.293 12370.249 - 12422.888: 90.1863% ( 23) 00:16:07.293 12422.888 - 12475.528: 90.3941% ( 27) 00:16:07.293 12475.528 - 12528.167: 90.5865% ( 25) 00:16:07.293 12528.167 - 12580.806: 90.7558% ( 22) 00:16:07.293 12580.806 - 12633.446: 90.9329% ( 23) 00:16:07.293 12633.446 - 12686.085: 91.0791% ( 19) 00:16:07.293 12686.085 - 12738.724: 91.2331% ( 20) 00:16:07.293 12738.724 - 12791.364: 91.3562% ( 16) 00:16:07.293 12791.364 - 12844.003: 91.4794% ( 16) 00:16:07.293 12844.003 - 12896.643: 91.5871% ( 14) 00:16:07.293 12896.643 - 12949.282: 91.6949% ( 14) 00:16:07.293 12949.282 - 13001.921: 91.8026% ( 14) 00:16:07.293 13001.921 - 13054.561: 91.8796% ( 10) 00:16:07.293 13054.561 - 13107.200: 91.9720% ( 12) 00:16:07.293 13107.200 - 13159.839: 92.0413% ( 9) 00:16:07.293 13159.839 - 13212.479: 92.1413% ( 13) 00:16:07.293 13212.479 - 13265.118: 92.2414% ( 13) 00:16:07.293 13265.118 - 13317.757: 92.3183% ( 10) 00:16:07.293 13317.757 - 13370.397: 92.3876% ( 9) 00:16:07.293 13370.397 - 13423.036: 92.4569% ( 9) 00:16:07.293 13423.036 - 13475.676: 92.5108% ( 7) 00:16:07.293 13475.676 - 13580.954: 92.6647% ( 20) 00:16:07.293 13580.954 - 13686.233: 92.8264% ( 21) 00:16:07.293 13686.233 - 13791.512: 93.0573% ( 30) 00:16:07.293 13791.512 - 13896.790: 93.2728% ( 28) 00:16:07.293 13896.790 - 14002.069: 93.4960% ( 29) 
00:16:07.293 14002.069 - 14107.348: 93.7346% ( 31) 00:16:07.293 14107.348 - 14212.627: 94.0040% ( 35) 00:16:07.293 14212.627 - 14317.905: 94.2657% ( 34) 00:16:07.293 14317.905 - 14423.184: 94.5120% ( 32) 00:16:07.293 14423.184 - 14528.463: 94.7583% ( 32) 00:16:07.293 14528.463 - 14633.741: 94.9584% ( 26) 00:16:07.293 14633.741 - 14739.020: 95.1201% ( 21) 00:16:07.293 14739.020 - 14844.299: 95.2663% ( 19) 00:16:07.293 14844.299 - 14949.578: 95.3895% ( 16) 00:16:07.293 14949.578 - 15054.856: 95.5357% ( 19) 00:16:07.293 15054.856 - 15160.135: 95.6820% ( 19) 00:16:07.293 15160.135 - 15265.414: 95.8282% ( 19) 00:16:07.293 15265.414 - 15370.692: 95.9283% ( 13) 00:16:07.293 15370.692 - 15475.971: 96.0206% ( 12) 00:16:07.293 15475.971 - 15581.250: 96.0822% ( 8) 00:16:07.293 15581.250 - 15686.529: 96.1438% ( 8) 00:16:07.293 15686.529 - 15791.807: 96.2054% ( 8) 00:16:07.293 15791.807 - 15897.086: 96.2592% ( 7) 00:16:07.293 15897.086 - 16002.365: 96.3208% ( 8) 00:16:07.293 16002.365 - 16107.643: 96.4209% ( 13) 00:16:07.293 16107.643 - 16212.922: 96.5286% ( 14) 00:16:07.293 16212.922 - 16318.201: 96.6056% ( 10) 00:16:07.293 16318.201 - 16423.480: 96.7057% ( 13) 00:16:07.293 16423.480 - 16528.758: 96.8288% ( 16) 00:16:07.293 16528.758 - 16634.037: 96.9289% ( 13) 00:16:07.293 16634.037 - 16739.316: 97.0443% ( 15) 00:16:07.293 16739.316 - 16844.594: 97.1213% ( 10) 00:16:07.293 16844.594 - 16949.873: 97.2291% ( 14) 00:16:07.293 16949.873 - 17055.152: 97.3214% ( 12) 00:16:07.293 17055.152 - 17160.431: 97.4138% ( 12) 00:16:07.293 17160.431 - 17265.709: 97.5216% ( 14) 00:16:07.293 17265.709 - 17370.988: 97.6370% ( 15) 00:16:07.293 17370.988 - 17476.267: 97.7679% ( 17) 00:16:07.293 17476.267 - 17581.545: 97.8833% ( 15) 00:16:07.293 17581.545 - 17686.824: 97.9988% ( 15) 00:16:07.293 17686.824 - 17792.103: 98.0911% ( 12) 00:16:07.293 17792.103 - 17897.382: 98.1758% ( 11) 00:16:07.293 17897.382 - 18002.660: 98.2528% ( 10) 00:16:07.293 18002.660 - 18107.939: 98.3220% ( 9) 00:16:07.293 18107.939 - 18213.218: 98.3913% ( 9) 00:16:07.293 18213.218 - 18318.496: 98.4529% ( 8) 00:16:07.293 18318.496 - 18423.775: 98.5222% ( 9) 00:16:07.293 18950.169 - 19055.447: 98.5453% ( 3) 00:16:07.293 19055.447 - 19160.726: 98.5914% ( 6) 00:16:07.293 19160.726 - 19266.005: 98.6222% ( 4) 00:16:07.293 19266.005 - 19371.284: 98.6530% ( 4) 00:16:07.293 19371.284 - 19476.562: 98.6838% ( 4) 00:16:07.293 19476.562 - 19581.841: 98.7146% ( 4) 00:16:07.293 19581.841 - 19687.120: 98.7531% ( 5) 00:16:07.293 19687.120 - 19792.398: 98.7839% ( 4) 00:16:07.293 19792.398 - 19897.677: 98.8224% ( 5) 00:16:07.293 19897.677 - 20002.956: 98.8608% ( 5) 00:16:07.293 20002.956 - 20108.235: 98.8993% ( 5) 00:16:07.293 20108.235 - 20213.513: 98.9378% ( 5) 00:16:07.293 20213.513 - 20318.792: 98.9763% ( 5) 00:16:07.293 20318.792 - 20424.071: 99.0071% ( 4) 00:16:07.293 20424.071 - 20529.349: 99.0148% ( 1) 00:16:07.293 41269.256 - 41479.814: 99.0379% ( 3) 00:16:07.293 41479.814 - 41690.371: 99.0994% ( 8) 00:16:07.293 41690.371 - 41900.929: 99.1456% ( 6) 00:16:07.293 41900.929 - 42111.486: 99.1995% ( 7) 00:16:07.293 42111.486 - 42322.043: 99.2457% ( 6) 00:16:07.293 42322.043 - 42532.601: 99.2996% ( 7) 00:16:07.293 42532.601 - 42743.158: 99.3381% ( 5) 00:16:07.293 42743.158 - 42953.716: 99.3919% ( 7) 00:16:07.293 42953.716 - 43164.273: 99.4381% ( 6) 00:16:07.293 43164.273 - 43374.831: 99.4920% ( 7) 00:16:07.293 43374.831 - 43585.388: 99.5074% ( 2) 00:16:07.293 48217.651 - 48428.209: 99.5305% ( 3) 00:16:07.293 48428.209 - 48638.766: 99.5844% ( 7) 00:16:07.293 
48638.766 - 48849.324: 99.6305% ( 6) 00:16:07.293 48849.324 - 49059.881: 99.6767% ( 6) 00:16:07.293 49059.881 - 49270.439: 99.7229% ( 6) 00:16:07.293 49270.439 - 49480.996: 99.7768% ( 7) 00:16:07.293 49480.996 - 49691.553: 99.8230% ( 6) 00:16:07.293 49691.553 - 49902.111: 99.8768% ( 7) 00:16:07.293 49902.111 - 50112.668: 99.9307% ( 7) 00:16:07.293 50112.668 - 50323.226: 99.9769% ( 6) 00:16:07.293 50323.226 - 50533.783: 100.0000% ( 3) 00:16:07.293 00:16:07.293 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:16:07.293 ============================================================================== 00:16:07.293 Range in us Cumulative IO count 00:16:07.293 7737.986 - 7790.625: 0.0385% ( 5) 00:16:07.293 7790.625 - 7843.264: 0.1539% ( 15) 00:16:07.293 7843.264 - 7895.904: 0.3849% ( 30) 00:16:07.293 7895.904 - 7948.543: 0.8852% ( 65) 00:16:07.293 7948.543 - 8001.182: 1.5625% ( 88) 00:16:07.293 8001.182 - 8053.822: 2.5785% ( 132) 00:16:07.293 8053.822 - 8106.461: 4.0486% ( 191) 00:16:07.293 8106.461 - 8159.100: 6.1884% ( 278) 00:16:07.293 8159.100 - 8211.740: 8.6746% ( 323) 00:16:07.293 8211.740 - 8264.379: 11.5456% ( 373) 00:16:07.293 8264.379 - 8317.018: 14.4781% ( 381) 00:16:07.294 8317.018 - 8369.658: 17.5493% ( 399) 00:16:07.294 8369.658 - 8422.297: 20.6897% ( 408) 00:16:07.294 8422.297 - 8474.937: 24.1071% ( 444) 00:16:07.294 8474.937 - 8527.576: 27.4323% ( 432) 00:16:07.294 8527.576 - 8580.215: 30.8498% ( 444) 00:16:07.294 8580.215 - 8632.855: 34.3750% ( 458) 00:16:07.294 8632.855 - 8685.494: 37.9156% ( 460) 00:16:07.294 8685.494 - 8738.133: 41.6179% ( 481) 00:16:07.294 8738.133 - 8790.773: 45.1817% ( 463) 00:16:07.294 8790.773 - 8843.412: 48.6145% ( 446) 00:16:07.294 8843.412 - 8896.051: 52.1013% ( 453) 00:16:07.294 8896.051 - 8948.691: 55.3879% ( 427) 00:16:07.294 8948.691 - 9001.330: 58.3975% ( 391) 00:16:07.294 9001.330 - 9053.969: 60.9991% ( 338) 00:16:07.294 9053.969 - 9106.609: 63.1619% ( 281) 00:16:07.294 9106.609 - 9159.248: 64.7552% ( 207) 00:16:07.294 9159.248 - 9211.888: 66.2331% ( 192) 00:16:07.294 9211.888 - 9264.527: 67.5416% ( 170) 00:16:07.294 9264.527 - 9317.166: 68.8424% ( 169) 00:16:07.294 9317.166 - 9369.806: 70.0893% ( 162) 00:16:07.294 9369.806 - 9422.445: 71.2361% ( 149) 00:16:07.294 9422.445 - 9475.084: 72.3368% ( 143) 00:16:07.294 9475.084 - 9527.724: 73.4683% ( 147) 00:16:07.294 9527.724 - 9580.363: 74.4689% ( 130) 00:16:07.294 9580.363 - 9633.002: 75.4233% ( 124) 00:16:07.294 9633.002 - 9685.642: 76.4163% ( 129) 00:16:07.294 9685.642 - 9738.281: 77.2475% ( 108) 00:16:07.294 9738.281 - 9790.920: 78.0480% ( 104) 00:16:07.294 9790.920 - 9843.560: 78.7254% ( 88) 00:16:07.294 9843.560 - 9896.199: 79.3719% ( 84) 00:16:07.294 9896.199 - 9948.839: 79.9646% ( 77) 00:16:07.294 9948.839 - 10001.478: 80.4649% ( 65) 00:16:07.294 10001.478 - 10054.117: 80.9113% ( 58) 00:16:07.294 10054.117 - 10106.757: 81.3193% ( 53) 00:16:07.294 10106.757 - 10159.396: 81.7272% ( 53) 00:16:07.294 10159.396 - 10212.035: 82.1352% ( 53) 00:16:07.294 10212.035 - 10264.675: 82.5123% ( 49) 00:16:07.294 10264.675 - 10317.314: 82.8587% ( 45) 00:16:07.294 10317.314 - 10369.953: 83.1589% ( 39) 00:16:07.294 10369.953 - 10422.593: 83.4206% ( 34) 00:16:07.294 10422.593 - 10475.232: 83.6823% ( 34) 00:16:07.294 10475.232 - 10527.871: 83.9440% ( 34) 00:16:07.294 10527.871 - 10580.511: 84.1595% ( 28) 00:16:07.294 10580.511 - 10633.150: 84.3211% ( 21) 00:16:07.294 10633.150 - 10685.790: 84.4366% ( 15) 00:16:07.294 10685.790 - 10738.429: 84.5751% ( 18) 00:16:07.294 10738.429 - 10791.068: 84.6675% 
( 12) 00:16:07.294 10791.068 - 10843.708: 84.7752% ( 14) 00:16:07.294 10843.708 - 10896.347: 84.8753% ( 13) 00:16:07.294 10896.347 - 10948.986: 84.9985% ( 16) 00:16:07.294 10948.986 - 11001.626: 85.1524% ( 20) 00:16:07.294 11001.626 - 11054.265: 85.3217% ( 22) 00:16:07.294 11054.265 - 11106.904: 85.4911% ( 22) 00:16:07.294 11106.904 - 11159.544: 85.6681% ( 23) 00:16:07.294 11159.544 - 11212.183: 85.7990% ( 17) 00:16:07.294 11212.183 - 11264.822: 85.9760% ( 23) 00:16:07.294 11264.822 - 11317.462: 86.1761% ( 26) 00:16:07.294 11317.462 - 11370.101: 86.3531% ( 23) 00:16:07.294 11370.101 - 11422.741: 86.5148% ( 21) 00:16:07.294 11422.741 - 11475.380: 86.6764% ( 21) 00:16:07.294 11475.380 - 11528.019: 86.8534% ( 23) 00:16:07.294 11528.019 - 11580.659: 87.0305% ( 23) 00:16:07.294 11580.659 - 11633.298: 87.1998% ( 22) 00:16:07.294 11633.298 - 11685.937: 87.4076% ( 27) 00:16:07.294 11685.937 - 11738.577: 87.6308% ( 29) 00:16:07.294 11738.577 - 11791.216: 87.8618% ( 30) 00:16:07.294 11791.216 - 11843.855: 88.0927% ( 30) 00:16:07.294 11843.855 - 11896.495: 88.3390% ( 32) 00:16:07.294 11896.495 - 11949.134: 88.5314% ( 25) 00:16:07.294 11949.134 - 12001.773: 88.6930% ( 21) 00:16:07.294 12001.773 - 12054.413: 88.8547% ( 21) 00:16:07.294 12054.413 - 12107.052: 88.9932% ( 18) 00:16:07.294 12107.052 - 12159.692: 89.1472% ( 20) 00:16:07.294 12159.692 - 12212.331: 89.2857% ( 18) 00:16:07.294 12212.331 - 12264.970: 89.4474% ( 21) 00:16:07.294 12264.970 - 12317.610: 89.6475% ( 26) 00:16:07.294 12317.610 - 12370.249: 89.8322% ( 24) 00:16:07.294 12370.249 - 12422.888: 90.0092% ( 23) 00:16:07.294 12422.888 - 12475.528: 90.1786% ( 22) 00:16:07.294 12475.528 - 12528.167: 90.3633% ( 24) 00:16:07.294 12528.167 - 12580.806: 90.4942% ( 17) 00:16:07.294 12580.806 - 12633.446: 90.6558% ( 21) 00:16:07.294 12633.446 - 12686.085: 90.8174% ( 21) 00:16:07.294 12686.085 - 12738.724: 90.9637% ( 19) 00:16:07.294 12738.724 - 12791.364: 91.0868% ( 16) 00:16:07.294 12791.364 - 12844.003: 91.2100% ( 16) 00:16:07.294 12844.003 - 12896.643: 91.3331% ( 16) 00:16:07.294 12896.643 - 12949.282: 91.4486% ( 15) 00:16:07.294 12949.282 - 13001.921: 91.5640% ( 15) 00:16:07.294 13001.921 - 13054.561: 91.6795% ( 15) 00:16:07.294 13054.561 - 13107.200: 91.7873% ( 14) 00:16:07.294 13107.200 - 13159.839: 91.9027% ( 15) 00:16:07.294 13159.839 - 13212.479: 92.0259% ( 16) 00:16:07.294 13212.479 - 13265.118: 92.1644% ( 18) 00:16:07.294 13265.118 - 13317.757: 92.2953% ( 17) 00:16:07.294 13317.757 - 13370.397: 92.3953% ( 13) 00:16:07.294 13370.397 - 13423.036: 92.4877% ( 12) 00:16:07.294 13423.036 - 13475.676: 92.5877% ( 13) 00:16:07.294 13475.676 - 13580.954: 92.7571% ( 22) 00:16:07.294 13580.954 - 13686.233: 92.9264% ( 22) 00:16:07.294 13686.233 - 13791.512: 93.1111% ( 24) 00:16:07.294 13791.512 - 13896.790: 93.3498% ( 31) 00:16:07.294 13896.790 - 14002.069: 93.5730% ( 29) 00:16:07.294 14002.069 - 14107.348: 93.7962% ( 29) 00:16:07.294 14107.348 - 14212.627: 93.9501% ( 20) 00:16:07.294 14212.627 - 14317.905: 94.1349% ( 24) 00:16:07.294 14317.905 - 14423.184: 94.2888% ( 20) 00:16:07.294 14423.184 - 14528.463: 94.4658% ( 23) 00:16:07.294 14528.463 - 14633.741: 94.6352% ( 22) 00:16:07.294 14633.741 - 14739.020: 94.8199% ( 24) 00:16:07.294 14739.020 - 14844.299: 95.0200% ( 26) 00:16:07.294 14844.299 - 14949.578: 95.2355% ( 28) 00:16:07.294 14949.578 - 15054.856: 95.3818% ( 19) 00:16:07.294 15054.856 - 15160.135: 95.5588% ( 23) 00:16:07.294 15160.135 - 15265.414: 95.7435% ( 24) 00:16:07.294 15265.414 - 15370.692: 95.9206% ( 23) 00:16:07.294 15370.692 - 
15475.971: 96.0899% ( 22) 00:16:07.294 15475.971 - 15581.250: 96.2823% ( 25) 00:16:07.294 15581.250 - 15686.529: 96.4594% ( 23) 00:16:07.294 15686.529 - 15791.807: 96.6056% ( 19) 00:16:07.294 15791.807 - 15897.086: 96.7288% ( 16) 00:16:07.294 15897.086 - 16002.365: 96.8750% ( 19) 00:16:07.294 16002.365 - 16107.643: 96.9828% ( 14) 00:16:07.294 16107.643 - 16212.922: 97.0597% ( 10) 00:16:07.294 16212.922 - 16318.201: 97.1059% ( 6) 00:16:07.294 16318.201 - 16423.480: 97.1598% ( 7) 00:16:07.294 16423.480 - 16528.758: 97.1906% ( 4) 00:16:07.294 16528.758 - 16634.037: 97.2291% ( 5) 00:16:07.294 16634.037 - 16739.316: 97.3060% ( 10) 00:16:07.294 16739.316 - 16844.594: 97.3830% ( 10) 00:16:07.294 16844.594 - 16949.873: 97.4523% ( 9) 00:16:07.294 16949.873 - 17055.152: 97.5292% ( 10) 00:16:07.294 17055.152 - 17160.431: 97.5908% ( 8) 00:16:07.294 17160.431 - 17265.709: 97.6601% ( 9) 00:16:07.294 17265.709 - 17370.988: 97.7294% ( 9) 00:16:07.294 17370.988 - 17476.267: 97.7986% ( 9) 00:16:07.294 17476.267 - 17581.545: 97.8833% ( 11) 00:16:07.294 17581.545 - 17686.824: 97.9757% ( 12) 00:16:07.294 17686.824 - 17792.103: 98.0450% ( 9) 00:16:07.294 17792.103 - 17897.382: 98.1296% ( 11) 00:16:07.294 17897.382 - 18002.660: 98.1835% ( 7) 00:16:07.294 18002.660 - 18107.939: 98.2220% ( 5) 00:16:07.294 18107.939 - 18213.218: 98.2759% ( 7) 00:16:07.294 18213.218 - 18318.496: 98.3605% ( 11) 00:16:07.294 18318.496 - 18423.775: 98.4375% ( 10) 00:16:07.294 18423.775 - 18529.054: 98.5145% ( 10) 00:16:07.294 18529.054 - 18634.333: 98.5914% ( 10) 00:16:07.294 18634.333 - 18739.611: 98.6684% ( 10) 00:16:07.294 18739.611 - 18844.890: 98.7454% ( 10) 00:16:07.294 18844.890 - 18950.169: 98.7916% ( 6) 00:16:07.294 18950.169 - 19055.447: 98.8300% ( 5) 00:16:07.294 19055.447 - 19160.726: 98.8608% ( 4) 00:16:07.294 19160.726 - 19266.005: 98.8993% ( 5) 00:16:07.294 19266.005 - 19371.284: 98.9378% ( 5) 00:16:07.294 19371.284 - 19476.562: 98.9686% ( 4) 00:16:07.294 19476.562 - 19581.841: 99.0071% ( 5) 00:16:07.294 19581.841 - 19687.120: 99.0148% ( 1) 00:16:07.294 39584.797 - 39795.354: 99.0533% ( 5) 00:16:07.294 39795.354 - 40005.912: 99.0994% ( 6) 00:16:07.294 40005.912 - 40216.469: 99.1533% ( 7) 00:16:07.294 40216.469 - 40427.027: 99.2072% ( 7) 00:16:07.294 40427.027 - 40637.584: 99.2534% ( 6) 00:16:07.294 40637.584 - 40848.141: 99.3073% ( 7) 00:16:07.294 40848.141 - 41058.699: 99.3534% ( 6) 00:16:07.294 41058.699 - 41269.256: 99.4073% ( 7) 00:16:07.294 41269.256 - 41479.814: 99.4535% ( 6) 00:16:07.294 41479.814 - 41690.371: 99.5074% ( 7) 00:16:07.294 46322.635 - 46533.192: 99.5459% ( 5) 00:16:07.294 46533.192 - 46743.749: 99.5998% ( 7) 00:16:07.294 46743.749 - 46954.307: 99.6459% ( 6) 00:16:07.294 46954.307 - 47164.864: 99.6921% ( 6) 00:16:07.294 47164.864 - 47375.422: 99.7383% ( 6) 00:16:07.294 47375.422 - 47585.979: 99.7922% ( 7) 00:16:07.294 47585.979 - 47796.537: 99.8384% ( 6) 00:16:07.294 47796.537 - 48007.094: 99.8922% ( 7) 00:16:07.294 48007.094 - 48217.651: 99.9538% ( 8) 00:16:07.294 48217.651 - 48428.209: 99.9923% ( 5) 00:16:07.294 48428.209 - 48638.766: 100.0000% ( 1) 00:16:07.294 00:16:07.294 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:16:07.294 ============================================================================== 00:16:07.294 Range in us Cumulative IO count 00:16:07.294 7737.986 - 7790.625: 0.0539% ( 7) 00:16:07.294 7790.625 - 7843.264: 0.1616% ( 14) 00:16:07.294 7843.264 - 7895.904: 0.4233% ( 34) 00:16:07.294 7895.904 - 7948.543: 0.8929% ( 61) 00:16:07.294 7948.543 - 8001.182: 1.5394% 
00:16:07.294 [bucket data continues: ~8001.182us through 46112.077us, ending at 100.0000% (     4)]
00:16:07.296
00:16:07.296 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:16:07.296 ==============================================================================
00:16:07.296        Range in us     Cumulative IO count
00:16:07.296 [bucket data omitted: 7737.986us through 43795.945us, ending at 100.0000% (     3)]
00:16:07.297
00:16:07.297 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:16:07.297 ==============================================================================
00:16:07.297        Range in us     Cumulative IO count
00:16:07.297 [bucket data omitted: 7737.986us through 36426.435us, ending at 100.0000% (     6)]
00:16:07.298
00:16:07.298 22:57:34 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:16:08.708 Initializing NVMe Controllers
00:16:08.708 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:16:08.708 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:16:08.708 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:16:08.708 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:16:08.708 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:16:08.708 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:16:08.708 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:16:08.708 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:16:08.708 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:16:08.708 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:16:08.708 Initialization complete. Launching workers.
00:16:08.708 ========================================================
00:16:08.708                                                              Latency(us)
00:16:08.708 Device Information                     :       IOPS      MiB/s    Average        min        max
00:16:08.708 PCIE (0000:00:10.0) NSID 1 from core 0:   10007.87     117.28   12821.14    8806.20   48008.55
00:16:08.708 PCIE (0000:00:11.0) NSID 1 from core 0:   10007.87     117.28   12796.32    8843.24   46162.56
00:16:08.708 PCIE (0000:00:13.0) NSID 1 from core 0:   10007.87     117.28   12770.94    8751.35   45416.37
00:16:08.708 PCIE (0000:00:12.0) NSID 1 from core 0:   10007.87     117.28   12745.73    8967.61   43177.03
00:16:08.708 PCIE (0000:00:12.0) NSID 2 from core 0:   10007.87     117.28   12720.63    8603.82   41299.64
00:16:08.708 PCIE (0000:00:12.0) NSID 3 from core 0:   10071.61     118.03   12614.87    8773.12   30530.60
00:16:08.708 ========================================================
00:16:08.708 Total                                  :   60110.96     704.43   12744.80    8603.82   48008.55
00:16:08.708
00:16:08.709 Summary latency data from core 0, all targets (us):
00:16:08.709 =================================================================================
00:16:08.709 Percentile      10.0 NSID1   11.0 NSID1   13.0 NSID1   12.0 NSID1   12.0 NSID2   12.0 NSID3
00:16:08.709   1.00000% :     9159.248     9211.888     9053.969     9211.888     9159.248     9159.248
00:16:08.709  10.00000% :    10001.478     9896.199    10001.478    10001.478    10001.478    10054.117
00:16:08.709  25.00000% :    11106.904    11159.544    11159.544    11106.904    11159.544    11159.544
00:16:08.709  50.00000% :    12001.773    12107.052    12107.052    12054.413    12054.413    12054.413
00:16:08.709  75.00000% :    13791.512    13686.233    13686.233    13686.233    13791.512    13896.790
00:16:08.709  90.00000% :    16107.643    16212.922    16002.365    15897.086    15791.807    15897.086
00:16:08.709  95.00000% :    17055.152    17160.431    17265.709    17476.267    17265.709    17476.267
00:16:08.709  98.00000% :    19160.726    19160.726    18634.333    18423.775    18318.496    18423.775
00:16:08.709  99.00000% :    34952.533    33057.516    32636.402    31162.500    29688.598    18950.169
00:16:08.709  99.50000% :    46322.635    44638.175    43795.945    41690.371    39795.354    29056.925
00:16:08.709  99.90000% :    47796.537    45901.520    45059.290    42953.716    41058.699    30320.270
00:16:08.709  99.99000% :    48007.094    46322.635    45480.405    43164.273    41269.256    30530.827
00:16:08.709  99.99900% :    48217.651    46322.635    45480.405    43374.831    41479.814    30530.827
00:16:08.709  99.99990% :    48217.651    46322.635    45480.405    43374.831    41479.814    30530.827
00:16:08.709  99.99999% :    48217.651    46322.635    45480.405    43374.831    41479.814    30530.827
00:16:08.709
00:16:08.709 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:16:08.709 ==============================================================================
00:16:08.709        Range in us     Cumulative IO count
00:16:08.709 [bucket data omitted: 8790.773us through 48217.651us, ending at 100.0000% (     1)]
00:16:08.710
00:16:08.710 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:16:08.710 ==============================================================================
00:16:08.710        Range in us     Cumulative IO count
00:16:08.710 [bucket data omitted: 8790.773us through 46322.635us, ending at 100.0000% (     2)]
00:16:08.711
00:16:08.711 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:16:08.711 ==============================================================================
00:16:08.711        Range in us     Cumulative IO count
00:16:08.711 [bucket data omitted: 8738.133us through 45480.405us, ending at 100.0000% (     5)]
00:16:08.711
00:16:08.711 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:16:08.711 ==============================================================================
00:16:08.711        Range in us     Cumulative IO count
00:16:08.711 [bucket data omitted: 8948.691us through 43374.831us, ending at 100.0000% (     1)]
00:16:08.712
00:16:08.712 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:16:08.712 ==============================================================================
00:16:08.712        Range in us     Cumulative IO count
00:16:08.712 [bucket data: 8580.215us through 9211.888us: 1.4630% (    39) ...]
9211.888 - 9264.527: 1.9407% ( 48) 00:16:08.712 9264.527 - 9317.166: 2.4881% ( 55) 00:16:08.712 9317.166 - 9369.806: 3.0155% ( 53) 00:16:08.712 9369.806 - 9422.445: 3.5629% ( 55) 00:16:08.712 9422.445 - 9475.084: 4.1998% ( 64) 00:16:08.712 9475.084 - 9527.724: 4.7572% ( 56) 00:16:08.712 9527.724 - 9580.363: 5.2349% ( 48) 00:16:08.712 9580.363 - 9633.002: 5.8121% ( 58) 00:16:08.712 9633.002 - 9685.642: 6.6083% ( 80) 00:16:08.712 9685.642 - 9738.281: 7.1656% ( 56) 00:16:08.712 9738.281 - 9790.920: 7.9419% ( 78) 00:16:08.712 9790.920 - 9843.560: 8.4096% ( 47) 00:16:08.712 9843.560 - 9896.199: 9.0068% ( 60) 00:16:08.712 9896.199 - 9948.839: 9.7930% ( 79) 00:16:08.712 9948.839 - 10001.478: 10.5295% ( 74) 00:16:08.712 10001.478 - 10054.117: 11.7038% ( 118) 00:16:08.712 10054.117 - 10106.757: 12.4701% ( 77) 00:16:08.712 10106.757 - 10159.396: 13.3857% ( 92) 00:16:08.712 10159.396 - 10212.035: 13.9431% ( 56) 00:16:08.712 10212.035 - 10264.675: 14.4705% ( 53) 00:16:08.712 10264.675 - 10317.314: 14.9980% ( 53) 00:16:08.712 10317.314 - 10369.953: 15.6549% ( 66) 00:16:08.712 10369.953 - 10422.593: 16.2918% ( 64) 00:16:08.712 10422.593 - 10475.232: 16.7894% ( 50) 00:16:08.712 10475.232 - 10527.871: 17.2572% ( 47) 00:16:08.712 10527.871 - 10580.511: 17.6851% ( 43) 00:16:08.713 10580.511 - 10633.150: 18.0334% ( 35) 00:16:08.713 10633.150 - 10685.790: 18.5808% ( 55) 00:16:08.713 10685.790 - 10738.429: 18.8794% ( 30) 00:16:08.713 10738.429 - 10791.068: 19.1481% ( 27) 00:16:08.713 10791.068 - 10843.708: 19.4068% ( 26) 00:16:08.713 10843.708 - 10896.347: 19.9244% ( 52) 00:16:08.713 10896.347 - 10948.986: 20.6907% ( 77) 00:16:08.713 10948.986 - 11001.626: 21.5665% ( 88) 00:16:08.713 11001.626 - 11054.265: 22.6115% ( 105) 00:16:08.713 11054.265 - 11106.904: 24.2436% ( 164) 00:16:08.713 11106.904 - 11159.544: 26.3336% ( 210) 00:16:08.713 11159.544 - 11212.183: 28.5231% ( 220) 00:16:08.713 11212.183 - 11264.822: 30.5135% ( 200) 00:16:08.713 11264.822 - 11317.462: 32.4443% ( 194) 00:16:08.713 11317.462 - 11370.101: 34.1162% ( 168) 00:16:08.713 11370.101 - 11422.741: 35.6588% ( 155) 00:16:08.713 11422.741 - 11475.380: 37.1218% ( 147) 00:16:08.713 11475.380 - 11528.019: 38.4654% ( 135) 00:16:08.713 11528.019 - 11580.659: 39.5203% ( 106) 00:16:08.713 11580.659 - 11633.298: 40.3065% ( 79) 00:16:08.713 11633.298 - 11685.937: 41.4311% ( 113) 00:16:08.713 11685.937 - 11738.577: 42.2572% ( 83) 00:16:08.713 11738.577 - 11791.216: 43.1131% ( 86) 00:16:08.713 11791.216 - 11843.855: 44.2078% ( 110) 00:16:08.713 11843.855 - 11896.495: 45.7902% ( 159) 00:16:08.713 11896.495 - 11949.134: 47.3229% ( 154) 00:16:08.713 11949.134 - 12001.773: 48.9550% ( 164) 00:16:08.713 12001.773 - 12054.413: 50.3782% ( 143) 00:16:08.713 12054.413 - 12107.052: 52.0601% ( 169) 00:16:08.713 12107.052 - 12159.692: 53.7818% ( 173) 00:16:08.713 12159.692 - 12212.331: 55.4837% ( 171) 00:16:08.713 12212.331 - 12264.970: 57.2850% ( 181) 00:16:08.713 12264.970 - 12317.610: 58.5291% ( 125) 00:16:08.713 12317.610 - 12370.249: 60.3105% ( 179) 00:16:08.713 12370.249 - 12422.888: 61.9427% ( 164) 00:16:08.713 12422.888 - 12475.528: 63.2663% ( 133) 00:16:08.713 12475.528 - 12528.167: 64.4506% ( 119) 00:16:08.713 12528.167 - 12580.806: 65.3563% ( 91) 00:16:08.713 12580.806 - 12633.446: 66.1027% ( 75) 00:16:08.713 12633.446 - 12686.085: 66.6501% ( 55) 00:16:08.713 12686.085 - 12738.724: 67.1875% ( 54) 00:16:08.713 12738.724 - 12791.364: 67.6851% ( 50) 00:16:08.713 12791.364 - 12844.003: 68.2922% ( 61) 00:16:08.713 12844.003 - 12896.643: 68.9789% ( 69) 00:16:08.713 
12896.643 - 12949.282: 69.7253% ( 75) 00:16:08.713 12949.282 - 13001.921: 70.3722% ( 65) 00:16:08.713 13001.921 - 13054.561: 70.9395% ( 57) 00:16:08.713 13054.561 - 13107.200: 71.2878% ( 35) 00:16:08.713 13107.200 - 13159.839: 71.5267% ( 24) 00:16:08.713 13159.839 - 13212.479: 71.8451% ( 32) 00:16:08.713 13212.479 - 13265.118: 72.1935% ( 35) 00:16:08.713 13265.118 - 13317.757: 72.5518% ( 36) 00:16:08.713 13317.757 - 13370.397: 72.8702% ( 32) 00:16:08.713 13370.397 - 13423.036: 73.3181% ( 45) 00:16:08.713 13423.036 - 13475.676: 73.8057% ( 49) 00:16:08.713 13475.676 - 13580.954: 74.4427% ( 64) 00:16:08.713 13580.954 - 13686.233: 74.9104% ( 47) 00:16:08.713 13686.233 - 13791.512: 75.6369% ( 73) 00:16:08.713 13791.512 - 13896.790: 76.2341% ( 60) 00:16:08.713 13896.790 - 14002.069: 76.7118% ( 48) 00:16:08.713 14002.069 - 14107.348: 77.1298% ( 42) 00:16:08.713 14107.348 - 14212.627: 77.7966% ( 67) 00:16:08.713 14212.627 - 14317.905: 78.5032% ( 71) 00:16:08.713 14317.905 - 14423.184: 78.9311% ( 43) 00:16:08.713 14423.184 - 14528.463: 79.5283% ( 60) 00:16:08.713 14528.463 - 14633.741: 80.6131% ( 109) 00:16:08.713 14633.741 - 14739.020: 81.8173% ( 121) 00:16:08.713 14739.020 - 14844.299: 83.3002% ( 149) 00:16:08.713 14844.299 - 14949.578: 84.2655% ( 97) 00:16:08.713 14949.578 - 15054.856: 85.2110% ( 95) 00:16:08.713 15054.856 - 15160.135: 86.1963% ( 99) 00:16:08.713 15160.135 - 15265.414: 87.0422% ( 85) 00:16:08.713 15265.414 - 15370.692: 87.8284% ( 79) 00:16:08.713 15370.692 - 15475.971: 88.6843% ( 86) 00:16:08.713 15475.971 - 15581.250: 89.4108% ( 73) 00:16:08.713 15581.250 - 15686.529: 89.8885% ( 48) 00:16:08.713 15686.529 - 15791.807: 90.3065% ( 42) 00:16:08.713 15791.807 - 15897.086: 90.5852% ( 28) 00:16:08.713 15897.086 - 16002.365: 90.8041% ( 22) 00:16:08.713 16002.365 - 16107.643: 91.2321% ( 43) 00:16:08.713 16107.643 - 16212.922: 91.6401% ( 41) 00:16:08.713 16212.922 - 16318.201: 92.0084% ( 37) 00:16:08.713 16318.201 - 16423.480: 92.3865% ( 38) 00:16:08.713 16423.480 - 16528.758: 92.5557% ( 17) 00:16:08.713 16528.758 - 16634.037: 92.7548% ( 20) 00:16:08.713 16634.037 - 16739.316: 93.2822% ( 53) 00:16:08.713 16739.316 - 16844.594: 93.7799% ( 50) 00:16:08.713 16844.594 - 16949.873: 94.1580% ( 38) 00:16:08.713 16949.873 - 17055.152: 94.5163% ( 36) 00:16:08.713 17055.152 - 17160.431: 94.8248% ( 31) 00:16:08.713 17160.431 - 17265.709: 95.0836% ( 26) 00:16:08.713 17265.709 - 17370.988: 95.3225% ( 24) 00:16:08.713 17370.988 - 17476.267: 95.5713% ( 25) 00:16:08.713 17476.267 - 17581.545: 95.8400% ( 27) 00:16:08.713 17581.545 - 17686.824: 96.2580% ( 42) 00:16:08.713 17686.824 - 17792.103: 96.6660% ( 41) 00:16:08.713 17792.103 - 17897.382: 97.0740% ( 41) 00:16:08.713 17897.382 - 18002.660: 97.4224% ( 35) 00:16:08.713 18002.660 - 18107.939: 97.6413% ( 22) 00:16:08.713 18107.939 - 18213.218: 97.9100% ( 27) 00:16:08.713 18213.218 - 18318.496: 98.0991% ( 19) 00:16:08.713 18318.496 - 18423.775: 98.3778% ( 28) 00:16:08.713 18423.775 - 18529.054: 98.5072% ( 13) 00:16:08.713 18529.054 - 18634.333: 98.6465% ( 14) 00:16:08.713 18634.333 - 18739.611: 98.7261% ( 8) 00:16:08.713 28425.253 - 28635.810: 98.7361% ( 1) 00:16:08.713 28635.810 - 28846.368: 98.7858% ( 5) 00:16:08.713 28846.368 - 29056.925: 98.8555% ( 7) 00:16:08.713 29056.925 - 29267.483: 98.9152% ( 6) 00:16:08.713 29267.483 - 29478.040: 98.9849% ( 7) 00:16:08.713 29478.040 - 29688.598: 99.0645% ( 8) 00:16:08.713 29688.598 - 29899.155: 99.1242% ( 6) 00:16:08.713 29899.155 - 30109.712: 99.1939% ( 7) 00:16:08.713 30109.712 - 30320.270: 99.2635% ( 7) 
00:16:08.713 30320.270 - 30530.827: 99.3232% ( 6) 00:16:08.713 30530.827 - 30741.385: 99.3631% ( 4) 00:16:08.713 39163.682 - 39374.239: 99.4228% ( 6) 00:16:08.713 39374.239 - 39584.797: 99.4825% ( 6) 00:16:08.713 39584.797 - 39795.354: 99.5422% ( 6) 00:16:08.713 39795.354 - 40005.912: 99.6019% ( 6) 00:16:08.713 40005.912 - 40216.469: 99.6716% ( 7) 00:16:08.713 40216.469 - 40427.027: 99.7313% ( 6) 00:16:08.713 40427.027 - 40637.584: 99.7910% ( 6) 00:16:08.713 40637.584 - 40848.141: 99.8607% ( 7) 00:16:08.713 40848.141 - 41058.699: 99.9303% ( 7) 00:16:08.713 41058.699 - 41269.256: 99.9900% ( 6) 00:16:08.713 41269.256 - 41479.814: 100.0000% ( 1) 00:16:08.713 00:16:08.713 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:16:08.713 ============================================================================== 00:16:08.713 Range in us Cumulative IO count 00:16:08.713 8738.133 - 8790.773: 0.0198% ( 2) 00:16:08.713 8790.773 - 8843.412: 0.0692% ( 5) 00:16:08.713 8843.412 - 8896.051: 0.1286% ( 6) 00:16:08.713 8896.051 - 8948.691: 0.2176% ( 9) 00:16:08.713 8948.691 - 9001.330: 0.3362% ( 12) 00:16:08.713 9001.330 - 9053.969: 0.6725% ( 34) 00:16:08.713 9053.969 - 9106.609: 0.9494% ( 28) 00:16:08.713 9106.609 - 9159.248: 1.4142% ( 47) 00:16:08.713 9159.248 - 9211.888: 2.0075% ( 60) 00:16:08.713 9211.888 - 9264.527: 2.3536% ( 35) 00:16:08.713 9264.527 - 9317.166: 2.8283% ( 48) 00:16:08.713 9317.166 - 9369.806: 3.2437% ( 42) 00:16:08.713 9369.806 - 9422.445: 3.5700% ( 33) 00:16:08.713 9422.445 - 9475.084: 4.1238% ( 56) 00:16:08.713 9475.084 - 9527.724: 4.3809% ( 26) 00:16:08.713 9527.724 - 9580.363: 4.7271% ( 35) 00:16:08.713 9580.363 - 9633.002: 5.3006% ( 58) 00:16:08.713 9633.002 - 9685.642: 5.8841% ( 59) 00:16:08.713 9685.642 - 9738.281: 6.8137% ( 94) 00:16:08.713 9738.281 - 9790.920: 7.5949% ( 79) 00:16:08.713 9790.920 - 9843.560: 8.2773% ( 69) 00:16:08.713 9843.560 - 9896.199: 8.8014% ( 53) 00:16:08.713 9896.199 - 9948.839: 9.3552% ( 56) 00:16:08.713 9948.839 - 10001.478: 9.9486% ( 60) 00:16:08.713 10001.478 - 10054.117: 10.6903% ( 75) 00:16:08.713 10054.117 - 10106.757: 11.4320% ( 75) 00:16:08.714 10106.757 - 10159.396: 12.3714% ( 95) 00:16:08.714 10159.396 - 10212.035: 13.1428% ( 78) 00:16:08.714 10212.035 - 10264.675: 14.0131% ( 88) 00:16:08.714 10264.675 - 10317.314: 15.0514% ( 105) 00:16:08.714 10317.314 - 10369.953: 15.7338% ( 69) 00:16:08.714 10369.953 - 10422.593: 16.3568% ( 63) 00:16:08.714 10422.593 - 10475.232: 16.9996% ( 65) 00:16:08.714 10475.232 - 10527.871: 17.4051% ( 41) 00:16:08.714 10527.871 - 10580.511: 17.8402% ( 44) 00:16:08.714 10580.511 - 10633.150: 18.3940% ( 56) 00:16:08.714 10633.150 - 10685.790: 18.6709% ( 28) 00:16:08.714 10685.790 - 10738.429: 18.9873% ( 32) 00:16:08.714 10738.429 - 10791.068: 19.4225% ( 44) 00:16:08.714 10791.068 - 10843.708: 19.7983% ( 38) 00:16:08.714 10843.708 - 10896.347: 20.2136% ( 42) 00:16:08.714 10896.347 - 10948.986: 20.8762% ( 67) 00:16:08.714 10948.986 - 11001.626: 21.7761% ( 91) 00:16:08.714 11001.626 - 11054.265: 22.9727% ( 121) 00:16:08.714 11054.265 - 11106.904: 24.2682% ( 131) 00:16:08.714 11106.904 - 11159.544: 26.2658% ( 202) 00:16:08.714 11159.544 - 11212.183: 28.0162% ( 177) 00:16:08.714 11212.183 - 11264.822: 30.1028% ( 211) 00:16:08.714 11264.822 - 11317.462: 32.2191% ( 214) 00:16:08.714 11317.462 - 11370.101: 33.7915% ( 159) 00:16:08.714 11370.101 - 11422.741: 35.3244% ( 155) 00:16:08.714 11422.741 - 11475.380: 36.7484% ( 144) 00:16:08.714 11475.380 - 11528.019: 37.8362% ( 110) 00:16:08.714 11528.019 - 11580.659: 
38.9043% ( 108) 00:16:08.714 11580.659 - 11633.298: 39.8734% ( 98) 00:16:08.714 11633.298 - 11685.937: 40.8129% ( 95) 00:16:08.714 11685.937 - 11738.577: 41.5546% ( 75) 00:16:08.714 11738.577 - 11791.216: 42.6028% ( 106) 00:16:08.714 11791.216 - 11843.855: 44.0467% ( 146) 00:16:08.714 11843.855 - 11896.495: 45.2037% ( 117) 00:16:08.714 11896.495 - 11949.134: 46.7860% ( 160) 00:16:08.714 11949.134 - 12001.773: 48.4276% ( 166) 00:16:08.714 12001.773 - 12054.413: 50.4648% ( 206) 00:16:08.714 12054.413 - 12107.052: 52.2844% ( 184) 00:16:08.714 12107.052 - 12159.692: 54.6381% ( 238) 00:16:08.714 12159.692 - 12212.331: 56.5467% ( 193) 00:16:08.714 12212.331 - 12264.970: 58.3366% ( 181) 00:16:08.714 12264.970 - 12317.610: 60.1365% ( 182) 00:16:08.714 12317.610 - 12370.249: 61.6100% ( 149) 00:16:08.714 12370.249 - 12422.888: 62.7769% ( 118) 00:16:08.714 12422.888 - 12475.528: 63.7658% ( 100) 00:16:08.714 12475.528 - 12528.167: 64.5767% ( 82) 00:16:08.714 12528.167 - 12580.806: 65.3283% ( 76) 00:16:08.714 12580.806 - 12633.446: 66.3370% ( 102) 00:16:08.714 12633.446 - 12686.085: 67.1578% ( 83) 00:16:08.714 12686.085 - 12738.724: 67.7017% ( 55) 00:16:08.714 12738.724 - 12791.364: 68.2259% ( 53) 00:16:08.714 12791.364 - 12844.003: 68.7302% ( 51) 00:16:08.714 12844.003 - 12896.643: 69.2939% ( 57) 00:16:08.714 12896.643 - 12949.282: 69.6203% ( 33) 00:16:08.714 12949.282 - 13001.921: 69.8378% ( 22) 00:16:08.714 13001.921 - 13054.561: 70.0752% ( 24) 00:16:08.714 13054.561 - 13107.200: 70.2729% ( 20) 00:16:08.714 13107.200 - 13159.839: 70.5400% ( 27) 00:16:08.714 13159.839 - 13212.479: 70.7773% ( 24) 00:16:08.714 13212.479 - 13265.118: 70.9850% ( 21) 00:16:08.714 13265.118 - 13317.757: 71.3113% ( 33) 00:16:08.714 13317.757 - 13370.397: 71.8354% ( 53) 00:16:08.714 13370.397 - 13423.036: 72.1519% ( 32) 00:16:08.714 13423.036 - 13475.676: 72.4288% ( 28) 00:16:08.714 13475.676 - 13580.954: 72.9727% ( 55) 00:16:08.714 13580.954 - 13686.233: 73.4375% ( 47) 00:16:08.714 13686.233 - 13791.512: 74.1990% ( 77) 00:16:08.714 13791.512 - 13896.790: 75.3362% ( 115) 00:16:08.714 13896.790 - 14002.069: 76.0483% ( 72) 00:16:08.714 14002.069 - 14107.348: 76.5823% ( 54) 00:16:08.714 14107.348 - 14212.627: 77.6108% ( 104) 00:16:08.714 14212.627 - 14317.905: 78.5700% ( 97) 00:16:08.714 14317.905 - 14423.184: 79.6974% ( 114) 00:16:08.714 14423.184 - 14528.463: 80.5380% ( 85) 00:16:08.714 14528.463 - 14633.741: 81.2401% ( 71) 00:16:08.714 14633.741 - 14739.020: 81.8532% ( 62) 00:16:08.714 14739.020 - 14844.299: 82.6839% ( 84) 00:16:08.714 14844.299 - 14949.578: 83.0993% ( 42) 00:16:08.714 14949.578 - 15054.856: 83.4751% ( 38) 00:16:08.714 15054.856 - 15160.135: 84.0289% ( 56) 00:16:08.714 15160.135 - 15265.414: 84.7013% ( 68) 00:16:08.714 15265.414 - 15370.692: 85.3343% ( 64) 00:16:08.714 15370.692 - 15475.971: 86.0562% ( 73) 00:16:08.714 15475.971 - 15581.250: 86.8770% ( 83) 00:16:08.714 15581.250 - 15686.529: 88.0637% ( 120) 00:16:08.714 15686.529 - 15791.807: 89.2207% ( 117) 00:16:08.714 15791.807 - 15897.086: 90.0218% ( 81) 00:16:08.714 15897.086 - 16002.365: 90.5953% ( 58) 00:16:08.714 16002.365 - 16107.643: 91.3469% ( 76) 00:16:08.714 16107.643 - 16212.922: 91.8710% ( 53) 00:16:08.714 16212.922 - 16318.201: 92.3161% ( 45) 00:16:08.714 16318.201 - 16423.480: 92.7710% ( 46) 00:16:08.714 16423.480 - 16528.758: 93.1962% ( 43) 00:16:08.714 16528.758 - 16634.037: 93.4830% ( 29) 00:16:08.714 16634.037 - 16739.316: 93.7797% ( 30) 00:16:08.714 16739.316 - 16844.594: 94.1752% ( 40) 00:16:08.714 16844.594 - 16949.873: 94.3730% ( 20) 
00:16:08.714 16949.873 - 17055.152: 94.5115% ( 14)
00:16:08.714 17055.152 - 17160.431: 94.6499% ( 14)
00:16:08.714 17160.431 - 17265.709: 94.7983% ( 15)
00:16:08.714 17265.709 - 17370.988: 94.9565% ( 16)
00:16:08.714 17370.988 - 17476.267: 95.2927% ( 34)
00:16:08.714 17476.267 - 17581.545: 95.6191% ( 33)
00:16:08.714 17581.545 - 17686.824: 95.9454% ( 33)
00:16:08.714 17686.824 - 17792.103: 96.3509% ( 41)
00:16:08.714 17792.103 - 17897.382: 96.7761% ( 43)
00:16:08.714 17897.382 - 18002.660: 97.2310% ( 46)
00:16:08.714 18002.660 - 18107.939: 97.5574% ( 33)
00:16:08.714 18107.939 - 18213.218: 97.8244% ( 27)
00:16:08.714 18213.218 - 18318.496: 97.9727% ( 15)
00:16:08.714 18318.496 - 18423.775: 98.0617% ( 9)
00:16:08.714 18423.775 - 18529.054: 98.2199% ( 16)
00:16:08.714 18529.054 - 18634.333: 98.3979% ( 18)
00:16:08.714 18634.333 - 18739.611: 98.8133% ( 42)
00:16:08.714 18739.611 - 18844.890: 98.9715% ( 16)
00:16:08.714 18844.890 - 18950.169: 99.0803% ( 11)
00:16:08.714 18950.169 - 19055.447: 99.1100% ( 3)
00:16:08.714 19055.447 - 19160.726: 99.1495% ( 4)
00:16:08.714 19160.726 - 19266.005: 99.1891% ( 4)
00:16:08.714 19266.005 - 19371.284: 99.2188% ( 3)
00:16:08.714 19371.284 - 19476.562: 99.2583% ( 4)
00:16:08.714 19476.562 - 19581.841: 99.2880% ( 3)
00:16:08.714 19581.841 - 19687.120: 99.3176% ( 3)
00:16:08.714 19687.120 - 19792.398: 99.3572% ( 4)
00:16:08.714 19792.398 - 19897.677: 99.3671% ( 1)
00:16:08.714 28425.253 - 28635.810: 99.4165% ( 5)
00:16:08.714 28635.810 - 28846.368: 99.4858% ( 7)
00:16:08.714 28846.368 - 29056.925: 99.5550% ( 7)
00:16:08.714 29056.925 - 29267.483: 99.6242% ( 7)
00:16:08.714 29267.483 - 29478.040: 99.6835% ( 6)
00:16:08.714 29478.040 - 29688.598: 99.7429% ( 6)
00:16:08.714 29688.598 - 29899.155: 99.8022% ( 6)
00:16:08.714 29899.155 - 30109.712: 99.8714% ( 7)
00:16:08.714 30109.712 - 30320.270: 99.9308% ( 6)
00:16:08.714 30320.270 - 30530.827: 100.0000% ( 7)
00:16:08.714
00:16:08.714 22:57:35 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:16:08.714
00:16:08.714 real 0m2.743s
00:16:08.714 user 0m2.297s
00:16:08.714 sys 0m0.333s
00:16:08.714 22:57:35 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:08.714 22:57:35 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:16:08.714 ************************************
00:16:08.714 END TEST nvme_perf
00:16:08.714 ************************************
00:16:08.973 22:57:36 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:16:08.973 22:57:36 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:16:08.973 22:57:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:08.973 22:57:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:08.973 ************************************
00:16:08.973 START TEST nvme_hello_world
00:16:08.973 ************************************
00:16:08.973 22:57:36 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:16:09.232 Initializing NVMe Controllers
00:16:09.232 Attached to 0000:00:10.0
00:16:09.232 Namespace ID: 1 size: 6GB
00:16:09.232 Attached to 0000:00:11.0
00:16:09.232 Namespace ID: 1 size: 5GB
00:16:09.232 Attached to 0000:00:13.0
00:16:09.232 Namespace ID: 1 size: 1GB
00:16:09.232 Attached to 0000:00:12.0
00:16:09.232 Namespace ID: 1 size: 4GB
00:16:09.232 Namespace ID: 2 size: 4GB
00:16:09.232 Namespace ID: 3 size: 4GB
00:16:09.232 Initialization complete.
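The hello_world example attached above drives one write/read round trip per namespace: it writes a host-memory buffer containing "Hello world!" to LBA 0, reads it back, and prints it, which produces the per-namespace output that follows. A minimal sketch of that loop, assuming the standard SPDK NVMe driver API (error handling elided; hello_ns and io_complete are illustrative names, not the example's actual symbols):

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void io_complete(void *arg, const struct spdk_nvme_cpl *cpl) {
        (void)cpl;
        *(bool *)arg = true;                 /* signal: this I/O finished */
    }

    static void hello_ns(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns) {
        struct spdk_nvme_qpair *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        /* DMA-safe buffer: the "host memory buffer for IO" seen in the log.
         * 4 KiB covers one LBA; the real example sizes this from
         * spdk_nvme_ns_get_sector_size(). */
        char *buf = spdk_zmalloc(0x1000, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        bool done = false;

        snprintf(buf, 0x1000, "%s", "Hello world!\n");
        spdk_nvme_ns_cmd_write(ns, qpair, buf, 0 /* LBA */, 1 /* LBA count */, io_complete, &done, 0);
        while (!done) spdk_nvme_qpair_process_completions(qpair, 0);

        done = false;
        memset(buf, 0, 0x1000);
        spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 1, io_complete, &done, 0);
        while (!done) spdk_nvme_qpair_process_completions(qpair, 0);

        printf("%s", buf);                   /* the "Hello world!" lines below */
        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qpair);
    }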
00:16:09.232 INFO: using host memory buffer for IO
00:16:09.232 Hello world!
00:16:09.232 INFO: using host memory buffer for IO
00:16:09.232 Hello world!
00:16:09.232 INFO: using host memory buffer for IO
00:16:09.232 Hello world!
00:16:09.232 INFO: using host memory buffer for IO
00:16:09.232 Hello world!
00:16:09.232 INFO: using host memory buffer for IO
00:16:09.232 Hello world!
00:16:09.232 INFO: using host memory buffer for IO
00:16:09.232 Hello world!
00:16:09.232
00:16:09.232 real 0m0.315s
00:16:09.232 user 0m0.109s
00:16:09.232 sys 0m0.155s
00:16:09.232 22:57:36 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:09.232 22:57:36 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:16:09.232 ************************************
00:16:09.232 END TEST nvme_hello_world
00:16:09.232 ************************************
00:16:09.232 22:57:36 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:16:09.232 22:57:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:09.232 22:57:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:09.232 22:57:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:09.232 ************************************
00:16:09.232 START TEST nvme_sgl
00:16:09.232 ************************************
00:16:09.232 22:57:36 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:16:09.490 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:16:09.490 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:16:09.490 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:16:09.490 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:16:09.490 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:16:09.490 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:16:09.490 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:16:09.490 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:16:09.490 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:16:09.490 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:16:09.490 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:16:09.490 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:16:09.490 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:16:09.490 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:16:09.490 NVMe Readv/Writev Request test
00:16:09.490 Attached to 0000:00:10.0
00:16:09.490 Attached to 0000:00:11.0
00:16:09.490 Attached to 0000:00:13.0
00:16:09.490 Attached to 0000:00:12.0
00:16:09.490 0000:00:10.0: build_io_request_2 test passed
00:16:09.490 0000:00:10.0: build_io_request_4 test passed
00:16:09.490 0000:00:10.0: build_io_request_5 test passed
00:16:09.490 0000:00:10.0: build_io_request_6 test passed
00:16:09.490 0000:00:10.0: build_io_request_7 test passed
00:16:09.490 0000:00:10.0: build_io_request_10 test passed
00:16:09.490 0000:00:11.0: build_io_request_2 test passed
00:16:09.490 0000:00:11.0: build_io_request_4 test passed
00:16:09.490 0000:00:11.0: build_io_request_5 test passed
00:16:09.490 0000:00:11.0: build_io_request_6 test passed
00:16:09.490 0000:00:11.0: build_io_request_7 test passed
00:16:09.490 0000:00:11.0: build_io_request_10 test passed
00:16:09.490 Cleaning up...
00:16:09.490
00:16:09.490 real 0m0.364s
00:16:09.490 user 0m0.176s
00:16:09.490 sys 0m0.144s
00:16:09.490 22:57:36 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:09.490 22:57:36 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:16:09.490 ************************************
00:16:09.490 END TEST nvme_sgl
00:16:09.490 ************************************
00:16:09.748 22:57:36 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:16:09.748 22:57:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:09.748 22:57:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:09.748 22:57:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:09.748 ************************************
00:16:09.748 START TEST nvme_e2edp
00:16:09.748 ************************************
00:16:09.748 22:57:36 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:16:10.006 NVMe Write/Read with End-to-End data protection test
00:16:10.007 Attached to 0000:00:10.0
00:16:10.007 Attached to 0000:00:11.0
00:16:10.007 Attached to 0000:00:13.0
00:16:10.007 Attached to 0000:00:12.0
00:16:10.007 Cleaning up...
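The nvme_dp run above attaches to all four controllers and goes straight to cleanup, which suggests none of these emulated namespaces is formatted with protection information, so there is no end-to-end protected I/O to exercise. For orientation only, a protected write differs from a plain one mainly in the metadata pointer and the PRACT/PRCHK flags. A hedged sketch, not the test's code (ns, qpair, buf, md_buf, io_complete, and done are assumed to be set up as in the earlier hello_world sketch):

    /* Skip namespaces without protection information support. */
    if (!(spdk_nvme_ns_get_flags(ns) & SPDK_NVME_NS_DPS_PI_SUPPORTED)) {
        return;
    }

    /* PRACT: the controller generates/strips the PI fields itself.
     * PRCHK_GUARD: the controller verifies the CRC guard in flight. */
    uint32_t io_flags = SPDK_NVME_IO_FLAGS_PRACT | SPDK_NVME_IO_FLAGS_PRCHK_GUARD;

    spdk_nvme_ns_cmd_write_with_md(ns, qpair, buf, md_buf,
                                   0 /* LBA */, 1 /* LBA count */,
                                   io_complete, &done, io_flags,
                                   0 /* apptag mask */, 0 /* apptag */);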
00:16:10.007
00:16:10.007 real 0m0.315s
00:16:10.007 user 0m0.106s
00:16:10.007 sys 0m0.165s
00:16:10.007 22:57:37 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:10.007 22:57:37 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:16:10.007 ************************************
00:16:10.007 END TEST nvme_e2edp
00:16:10.007 ************************************
00:16:10.007 22:57:37 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:16:10.007 22:57:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:10.007 22:57:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:10.007 22:57:37 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:10.007 ************************************
00:16:10.007 START TEST nvme_reserve
00:16:10.007 ************************************
00:16:10.007 22:57:37 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:16:10.264 =====================================================
00:16:10.264 NVMe Controller at PCI bus 0, device 16, function 0
00:16:10.264 =====================================================
00:16:10.264 Reservations: Not Supported
00:16:10.264 =====================================================
00:16:10.264 NVMe Controller at PCI bus 0, device 17, function 0
00:16:10.264 =====================================================
00:16:10.264 Reservations: Not Supported
00:16:10.264 =====================================================
00:16:10.264 NVMe Controller at PCI bus 0, device 19, function 0
00:16:10.264 =====================================================
00:16:10.264 Reservations: Not Supported
00:16:10.264 =====================================================
00:16:10.264 NVMe Controller at PCI bus 0, device 18, function 0
00:16:10.264 =====================================================
00:16:10.264 Reservations: Not Supported
00:16:10.264 Reservation test passed
00:16:10.264
00:16:10.264 real 0m0.325s
00:16:10.264 user 0m0.110s
00:16:10.264 sys 0m0.164s
00:16:10.264 22:57:37 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:10.264 22:57:37 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:16:10.264 ************************************
00:16:10.264 END TEST nvme_reserve
00:16:10.264 ************************************
00:16:10.521 22:57:37 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:16:10.521 22:57:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:10.521 22:57:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:10.522 22:57:37 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:10.522 ************************************
00:16:10.522 START TEST nvme_err_injection
00:16:10.522 ************************************
00:16:10.522 22:57:37 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:16:10.779 NVMe Error Injection test
00:16:10.779 Attached to 0000:00:10.0
00:16:10.779 Attached to 0000:00:11.0
00:16:10.779 Attached to 0000:00:13.0
00:16:10.779 Attached to 0000:00:12.0
00:16:10.779 0000:00:12.0: get features failed as expected
00:16:10.779 0000:00:10.0: get features failed as expected
00:16:10.779 0000:00:11.0: get features failed as expected
00:16:10.779 0000:00:13.0: get features failed as expected
00:16:10.779 0000:00:10.0: get features successfully as expected
00:16:10.779 0000:00:11.0: get features successfully as expected
00:16:10.779 0000:00:13.0: get features successfully as expected
00:16:10.779 0000:00:12.0: get features successfully as expected
00:16:10.779 0000:00:11.0: read failed as expected
00:16:10.779 0000:00:13.0: read failed as expected
00:16:10.779 0000:00:10.0: read failed as expected
00:16:10.779 0000:00:12.0: read failed as expected
00:16:10.779 0000:00:10.0: read successfully as expected
00:16:10.779 0000:00:11.0: read successfully as expected
00:16:10.779 0000:00:13.0: read successfully as expected
00:16:10.779 0000:00:12.0: read successfully as expected
00:16:10.779 Cleaning up...
00:16:10.779
00:16:10.779 real 0m0.339s
00:16:10.779 user 0m0.124s
00:16:10.779 sys 0m0.165s
00:16:10.779 22:57:38 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:10.779 22:57:38 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:16:10.779 ************************************
00:16:10.779 END TEST nvme_err_injection
00:16:10.779 ************************************
00:16:10.779 22:57:38 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:16:10.779 22:57:38 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:16:10.779 22:57:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:10.779 22:57:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:10.779 ************************************
00:16:10.779 START TEST nvme_overhead
00:16:10.779 ************************************
00:16:10.779 22:57:38 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:16:12.152 Initializing NVMe Controllers
00:16:12.152 Attached to 0000:00:10.0
00:16:12.152 Attached to 0000:00:11.0
00:16:12.152 Attached to 0000:00:13.0
00:16:12.152 Attached to 0000:00:12.0
00:16:12.152 Initialization complete. Launching workers.
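The "submit" and "complete" histograms that follow separate the two places a polled-mode I/O spends CPU time: inside the submission call itself, and inside the completion poll that reaps it. Conceptually the tool times each side per I/O roughly like this (a sketch assuming SPDK's tick counters; the real overhead tool also tracks min/max and fills the histogram buckets):

    /* ns, qpair, buf, io_complete, done: set up as in the hello_world sketch. */
    uint64_t t0 = spdk_get_ticks();
    spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* LBA */, 1, io_complete, &done, 0);
    uint64_t submit_ticks = spdk_get_ticks() - t0;    /* -> "submit (in ns)" */

    t0 = spdk_get_ticks();
    while (!done) {
        spdk_nvme_qpair_process_completions(qpair, 0);
    }
    uint64_t complete_ticks = spdk_get_ticks() - t0;  /* -> "complete (in ns)" */

    double ns_per_tick = 1e9 / (double)spdk_get_ticks_hz();
    printf("submit: %.1f ns complete: %.1f ns\n",
           submit_ticks * ns_per_tick, complete_ticks * ns_per_tick);

As a sanity check on the numbers below: the reported submit average of 15312.6 ns is about 15.3 us, a little above the roughly 14 us median visible in the submit histogram (the cumulative count crosses 50% near the 14.085 us bucket), with the mean pulled up by the 145438.6 ns outlier at the max.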
00:16:12.152 submit (in ns) avg, min, max = 15312.6, 11412.9, 145438.6 00:16:12.152 complete (in ns) avg, min, max = 9387.8, 7717.3, 35361.4 00:16:12.152 00:16:12.152 Submit histogram 00:16:12.152 ================ 00:16:12.152 Range in us Cumulative Count 00:16:12.152 11.412 - 11.463: 0.0155% ( 1) 00:16:12.152 11.875 - 11.926: 0.0310% ( 1) 00:16:12.152 12.183 - 12.235: 0.0464% ( 1) 00:16:12.152 12.851 - 12.903: 0.0619% ( 1) 00:16:12.152 12.954 - 13.006: 0.0774% ( 1) 00:16:12.152 13.006 - 13.057: 0.1238% ( 3) 00:16:12.152 13.057 - 13.108: 0.2322% ( 7) 00:16:12.152 13.108 - 13.160: 0.4643% ( 15) 00:16:12.152 13.160 - 13.263: 1.8109% ( 87) 00:16:12.152 13.263 - 13.365: 4.5194% ( 175) 00:16:12.152 13.365 - 13.468: 7.9399% ( 221) 00:16:12.152 13.468 - 13.571: 13.8833% ( 384) 00:16:12.152 13.571 - 13.674: 21.9161% ( 519) 00:16:12.152 13.674 - 13.777: 30.2120% ( 536) 00:16:12.152 13.777 - 13.880: 37.9817% ( 502) 00:16:12.152 13.880 - 13.982: 44.9930% ( 453) 00:16:12.152 13.982 - 14.085: 51.2305% ( 403) 00:16:12.152 14.085 - 14.188: 56.3690% ( 332) 00:16:12.152 14.188 - 14.291: 60.6562% ( 277) 00:16:12.152 14.291 - 14.394: 64.3244% ( 237) 00:16:12.152 14.394 - 14.496: 67.0794% ( 178) 00:16:12.152 14.496 - 14.599: 69.3546% ( 147) 00:16:12.152 14.599 - 14.702: 71.3976% ( 132) 00:16:12.152 14.702 - 14.805: 72.9763% ( 102) 00:16:12.152 14.805 - 14.908: 74.1371% ( 75) 00:16:12.152 14.908 - 15.010: 75.2515% ( 72) 00:16:12.152 15.010 - 15.113: 75.8706% ( 40) 00:16:12.152 15.113 - 15.216: 76.3968% ( 34) 00:16:12.152 15.216 - 15.319: 76.7219% ( 21) 00:16:12.152 15.319 - 15.422: 76.9540% ( 15) 00:16:12.152 15.422 - 15.524: 77.1862% ( 15) 00:16:12.152 15.524 - 15.627: 77.2945% ( 7) 00:16:12.152 15.627 - 15.730: 77.3410% ( 3) 00:16:12.152 15.730 - 15.833: 77.3719% ( 2) 00:16:12.152 15.833 - 15.936: 77.4029% ( 2) 00:16:12.152 15.936 - 16.039: 77.4648% ( 4) 00:16:12.152 16.039 - 16.141: 77.4957% ( 2) 00:16:12.152 16.141 - 16.244: 77.5577% ( 4) 00:16:12.152 16.244 - 16.347: 77.6350% ( 5) 00:16:12.152 16.347 - 16.450: 77.7279% ( 6) 00:16:12.152 16.450 - 16.553: 77.7434% ( 1) 00:16:12.152 16.553 - 16.655: 77.8208% ( 5) 00:16:12.152 16.655 - 16.758: 77.9291% ( 7) 00:16:12.152 16.758 - 16.861: 78.0220% ( 6) 00:16:12.152 16.861 - 16.964: 78.2387% ( 14) 00:16:12.152 16.964 - 17.067: 78.4863% ( 16) 00:16:12.153 17.067 - 17.169: 78.6411% ( 10) 00:16:12.153 17.169 - 17.272: 79.0280% ( 25) 00:16:12.153 17.272 - 17.375: 79.3530% ( 21) 00:16:12.153 17.375 - 17.478: 79.8328% ( 31) 00:16:12.153 17.478 - 17.581: 80.3900% ( 36) 00:16:12.153 17.581 - 17.684: 80.7151% ( 21) 00:16:12.153 17.684 - 17.786: 81.3496% ( 41) 00:16:12.153 17.786 - 17.889: 81.8294% ( 31) 00:16:12.153 17.889 - 17.992: 82.5414% ( 46) 00:16:12.153 17.992 - 18.095: 83.2688% ( 47) 00:16:12.153 18.095 - 18.198: 83.8260% ( 36) 00:16:12.153 18.198 - 18.300: 84.6154% ( 51) 00:16:12.153 18.300 - 18.403: 85.2190% ( 39) 00:16:12.153 18.403 - 18.506: 86.1322% ( 59) 00:16:12.153 18.506 - 18.609: 86.8906% ( 49) 00:16:12.153 18.609 - 18.712: 87.3859% ( 32) 00:16:12.153 18.712 - 18.814: 88.0823% ( 45) 00:16:12.153 18.814 - 18.917: 88.6395% ( 36) 00:16:12.153 18.917 - 19.020: 89.1503% ( 33) 00:16:12.153 19.020 - 19.123: 89.6610% ( 33) 00:16:12.153 19.123 - 19.226: 90.3730% ( 46) 00:16:12.153 19.226 - 19.329: 91.1469% ( 50) 00:16:12.153 19.329 - 19.431: 91.6112% ( 30) 00:16:12.153 19.431 - 19.534: 92.1374% ( 34) 00:16:12.153 19.534 - 19.637: 92.5399% ( 26) 00:16:12.153 19.637 - 19.740: 92.8030% ( 17) 00:16:12.153 19.740 - 19.843: 93.2209% ( 27) 00:16:12.153 19.843 - 19.945: 
93.5459% ( 21) 00:16:12.153 19.945 - 20.048: 93.8554% ( 20) 00:16:12.153 20.048 - 20.151: 94.1959% ( 22) 00:16:12.153 20.151 - 20.254: 94.4900% ( 19) 00:16:12.153 20.254 - 20.357: 94.7531% ( 17) 00:16:12.153 20.357 - 20.459: 94.9543% ( 13) 00:16:12.153 20.459 - 20.562: 95.1555% ( 13) 00:16:12.153 20.562 - 20.665: 95.4341% ( 18) 00:16:12.153 20.665 - 20.768: 95.6663% ( 15) 00:16:12.153 20.768 - 20.871: 95.8675% ( 13) 00:16:12.153 20.871 - 20.973: 96.0223% ( 10) 00:16:12.153 20.973 - 21.076: 96.1771% ( 10) 00:16:12.153 21.076 - 21.179: 96.3473% ( 11) 00:16:12.153 21.179 - 21.282: 96.4711% ( 8) 00:16:12.153 21.282 - 21.385: 96.6259% ( 10) 00:16:12.153 21.385 - 21.488: 96.7497% ( 8) 00:16:12.153 21.488 - 21.590: 96.8426% ( 6) 00:16:12.153 21.590 - 21.693: 96.9200% ( 5) 00:16:12.153 21.693 - 21.796: 97.0283% ( 7) 00:16:12.153 21.796 - 21.899: 97.2141% ( 12) 00:16:12.153 21.899 - 22.002: 97.2605% ( 3) 00:16:12.153 22.002 - 22.104: 97.3069% ( 3) 00:16:12.153 22.104 - 22.207: 97.4153% ( 7) 00:16:12.153 22.207 - 22.310: 97.4926% ( 5) 00:16:12.153 22.310 - 22.413: 97.5236% ( 2) 00:16:12.153 22.413 - 22.516: 97.5546% ( 2) 00:16:12.153 22.516 - 22.618: 97.6010% ( 3) 00:16:12.153 22.618 - 22.721: 97.6939% ( 6) 00:16:12.153 22.721 - 22.824: 97.7093% ( 1) 00:16:12.153 22.824 - 22.927: 97.7558% ( 3) 00:16:12.153 22.927 - 23.030: 97.8177% ( 4) 00:16:12.153 23.030 - 23.133: 97.8951% ( 5) 00:16:12.153 23.133 - 23.235: 97.9415% ( 3) 00:16:12.153 23.235 - 23.338: 97.9879% ( 3) 00:16:12.153 23.338 - 23.441: 98.0189% ( 2) 00:16:12.153 23.441 - 23.544: 98.0498% ( 2) 00:16:12.153 23.544 - 23.647: 98.0808% ( 2) 00:16:12.153 23.647 - 23.749: 98.1427% ( 4) 00:16:12.153 23.749 - 23.852: 98.2046% ( 4) 00:16:12.153 23.852 - 23.955: 98.2820% ( 5) 00:16:12.153 24.058 - 24.161: 98.2975% ( 1) 00:16:12.153 24.161 - 24.263: 98.3130% ( 1) 00:16:12.153 24.263 - 24.366: 98.3284% ( 1) 00:16:12.153 24.572 - 24.675: 98.3439% ( 1) 00:16:12.153 24.675 - 24.778: 98.3594% ( 1) 00:16:12.153 24.778 - 24.880: 98.3749% ( 1) 00:16:12.153 24.880 - 24.983: 98.3903% ( 1) 00:16:12.153 24.983 - 25.086: 98.4213% ( 2) 00:16:12.153 25.086 - 25.189: 98.4368% ( 1) 00:16:12.153 25.292 - 25.394: 98.4523% ( 1) 00:16:12.153 25.497 - 25.600: 98.4832% ( 2) 00:16:12.153 25.600 - 25.703: 98.5142% ( 2) 00:16:12.153 25.703 - 25.806: 98.5296% ( 1) 00:16:12.153 25.806 - 25.908: 98.5761% ( 3) 00:16:12.153 25.908 - 26.011: 98.6380% ( 4) 00:16:12.153 26.114 - 26.217: 98.6689% ( 2) 00:16:12.153 26.217 - 26.320: 98.6844% ( 1) 00:16:12.153 26.320 - 26.525: 98.7928% ( 7) 00:16:12.153 26.525 - 26.731: 98.8701% ( 5) 00:16:12.153 26.731 - 26.937: 98.9321% ( 4) 00:16:12.153 26.937 - 27.142: 98.9630% ( 2) 00:16:12.153 27.142 - 27.348: 98.9940% ( 2) 00:16:12.153 27.348 - 27.553: 99.0249% ( 2) 00:16:12.153 27.553 - 27.759: 99.0868% ( 4) 00:16:12.153 27.759 - 27.965: 99.1333% ( 3) 00:16:12.153 27.965 - 28.170: 99.1487% ( 1) 00:16:12.153 28.170 - 28.376: 99.1797% ( 2) 00:16:12.153 28.376 - 28.582: 99.1952% ( 1) 00:16:12.153 28.582 - 28.787: 99.2261% ( 2) 00:16:12.153 28.787 - 28.993: 99.2726% ( 3) 00:16:12.153 28.993 - 29.198: 99.2880% ( 1) 00:16:12.153 29.404 - 29.610: 99.3190% ( 2) 00:16:12.153 29.610 - 29.815: 99.3654% ( 3) 00:16:12.153 29.815 - 30.021: 99.3964% ( 2) 00:16:12.153 30.021 - 30.227: 99.4428% ( 3) 00:16:12.153 30.227 - 30.432: 99.4738% ( 2) 00:16:12.153 30.432 - 30.638: 99.5047% ( 2) 00:16:12.153 30.638 - 30.843: 99.5357% ( 2) 00:16:12.153 30.843 - 31.049: 99.5512% ( 1) 00:16:12.153 31.049 - 31.255: 99.5666% ( 1) 00:16:12.153 31.255 - 31.460: 99.6131% ( 3) 
00:16:12.153 31.666 - 31.871: 99.6285% ( 1) 00:16:12.153 31.871 - 32.077: 99.6440% ( 1) 00:16:12.153 32.077 - 32.283: 99.6750% ( 2) 00:16:12.153 32.283 - 32.488: 99.7059% ( 2) 00:16:12.153 32.488 - 32.694: 99.7214% ( 1) 00:16:12.153 32.900 - 33.105: 99.7369% ( 1) 00:16:12.153 33.105 - 33.311: 99.7678% ( 2) 00:16:12.153 33.311 - 33.516: 99.8143% ( 3) 00:16:12.153 33.928 - 34.133: 99.8297% ( 1) 00:16:12.153 34.133 - 34.339: 99.8607% ( 2) 00:16:12.153 34.750 - 34.956: 99.8762% ( 1) 00:16:12.153 35.161 - 35.367: 99.8917% ( 1) 00:16:12.153 36.601 - 36.806: 99.9071% ( 1) 00:16:12.153 38.451 - 38.657: 99.9226% ( 1) 00:16:12.153 40.919 - 41.124: 99.9381% ( 1) 00:16:12.153 43.592 - 43.798: 99.9536% ( 1) 00:16:12.153 45.854 - 46.059: 99.9690% ( 1) 00:16:12.153 84.305 - 84.716: 99.9845% ( 1) 00:16:12.153 144.758 - 145.581: 100.0000% ( 1) 00:16:12.153 00:16:12.153 Complete histogram 00:16:12.153 ================== 00:16:12.153 Range in us Cumulative Count 00:16:12.153 7.711 - 7.762: 0.3715% ( 24) 00:16:12.153 7.762 - 7.814: 2.7086% ( 151) 00:16:12.153 7.814 - 7.865: 8.0019% ( 342) 00:16:12.153 7.865 - 7.916: 15.2298% ( 467) 00:16:12.153 7.916 - 7.968: 21.9161% ( 432) 00:16:12.153 7.968 - 8.019: 26.3891% ( 289) 00:16:12.153 8.019 - 8.071: 30.0418% ( 236) 00:16:12.153 8.071 - 8.122: 32.5491% ( 162) 00:16:12.153 8.122 - 8.173: 34.9172% ( 153) 00:16:12.153 8.173 - 8.225: 37.0221% ( 136) 00:16:12.153 8.225 - 8.276: 40.0712% ( 197) 00:16:12.153 8.276 - 8.328: 42.0987% ( 131) 00:16:12.153 8.328 - 8.379: 44.3585% ( 146) 00:16:12.153 8.379 - 8.431: 45.9526% ( 103) 00:16:12.153 8.431 - 8.482: 47.4230% ( 95) 00:16:12.153 8.482 - 8.533: 49.7446% ( 150) 00:16:12.153 8.533 - 8.585: 52.2675% ( 163) 00:16:12.153 8.585 - 8.636: 54.6510% ( 154) 00:16:12.153 8.636 - 8.688: 57.2202% ( 166) 00:16:12.153 8.688 - 8.739: 59.6038% ( 154) 00:16:12.153 8.739 - 8.790: 62.0183% ( 156) 00:16:12.153 8.790 - 8.842: 64.4792% ( 159) 00:16:12.153 8.842 - 8.893: 66.3365% ( 120) 00:16:12.153 8.893 - 8.945: 68.1164% ( 115) 00:16:12.153 8.945 - 8.996: 69.6796% ( 101) 00:16:12.153 8.996 - 9.047: 71.2119% ( 99) 00:16:12.153 9.047 - 9.099: 72.1405% ( 60) 00:16:12.153 9.099 - 9.150: 73.2085% ( 69) 00:16:12.153 9.150 - 9.202: 74.0443% ( 54) 00:16:12.153 9.202 - 9.253: 74.7562% ( 46) 00:16:12.153 9.253 - 9.304: 75.3908% ( 41) 00:16:12.153 9.304 - 9.356: 76.0873% ( 45) 00:16:12.153 9.356 - 9.407: 76.7373% ( 42) 00:16:12.153 9.407 - 9.459: 77.1552% ( 27) 00:16:12.153 9.459 - 9.510: 77.4803% ( 21) 00:16:12.153 9.510 - 9.561: 77.6815% ( 13) 00:16:12.153 9.561 - 9.613: 77.9446% ( 17) 00:16:12.153 9.613 - 9.664: 78.2387% ( 19) 00:16:12.153 9.664 - 9.716: 78.4089% ( 11) 00:16:12.153 9.716 - 9.767: 78.6875% ( 18) 00:16:12.153 9.767 - 9.818: 78.8113% ( 8) 00:16:12.153 9.818 - 9.870: 78.8887% ( 5) 00:16:12.153 9.870 - 9.921: 79.0899% ( 13) 00:16:12.153 9.921 - 9.973: 79.2602% ( 11) 00:16:12.153 9.973 - 10.024: 79.3685% ( 7) 00:16:12.153 10.024 - 10.076: 79.4769% ( 7) 00:16:12.153 10.076 - 10.127: 79.6316% ( 10) 00:16:12.153 10.127 - 10.178: 79.6935% ( 4) 00:16:12.153 10.178 - 10.230: 79.8328% ( 9) 00:16:12.153 10.230 - 10.281: 79.9102% ( 5) 00:16:12.153 10.281 - 10.333: 80.0650% ( 10) 00:16:12.153 10.333 - 10.384: 80.3281% ( 17) 00:16:12.153 10.384 - 10.435: 80.5603% ( 15) 00:16:12.153 10.435 - 10.487: 80.8389% ( 18) 00:16:12.154 10.487 - 10.538: 81.1330% ( 19) 00:16:12.154 10.538 - 10.590: 81.4580% ( 21) 00:16:12.154 10.590 - 10.641: 81.6747% ( 14) 00:16:12.154 10.641 - 10.692: 81.9378% ( 17) 00:16:12.154 10.692 - 10.744: 82.2473% ( 20) 00:16:12.154 
10.744 - 10.795: 82.5414% ( 19) 00:16:12.154 10.795 - 10.847: 82.8355% ( 19) 00:16:12.154 10.847 - 10.898: 83.0676% ( 15) 00:16:12.154 10.898 - 10.949: 83.4855% ( 27) 00:16:12.154 10.949 - 11.001: 83.8879% ( 26) 00:16:12.154 11.001 - 11.052: 84.1201% ( 15) 00:16:12.154 11.052 - 11.104: 84.4142% ( 19) 00:16:12.154 11.104 - 11.155: 84.7082% ( 19) 00:16:12.154 11.155 - 11.206: 85.0178% ( 20) 00:16:12.154 11.206 - 11.258: 85.3119% ( 19) 00:16:12.154 11.258 - 11.309: 85.5905% ( 18) 00:16:12.154 11.309 - 11.361: 85.8536% ( 17) 00:16:12.154 11.361 - 11.412: 86.1941% ( 22) 00:16:12.154 11.412 - 11.463: 86.4882% ( 19) 00:16:12.154 11.463 - 11.515: 86.7977% ( 20) 00:16:12.154 11.515 - 11.566: 87.0144% ( 14) 00:16:12.154 11.566 - 11.618: 87.3239% ( 20) 00:16:12.154 11.618 - 11.669: 87.6025% ( 18) 00:16:12.154 11.669 - 11.720: 87.9430% ( 22) 00:16:12.154 11.720 - 11.772: 88.2216% ( 18) 00:16:12.154 11.772 - 11.823: 88.4538% ( 15) 00:16:12.154 11.823 - 11.875: 88.8098% ( 23) 00:16:12.154 11.875 - 11.926: 89.0884% ( 18) 00:16:12.154 11.926 - 11.978: 89.3360% ( 16) 00:16:12.154 11.978 - 12.029: 89.6456% ( 20) 00:16:12.154 12.029 - 12.080: 89.9861% ( 22) 00:16:12.154 12.080 - 12.132: 90.2492% ( 17) 00:16:12.154 12.132 - 12.183: 90.5123% ( 17) 00:16:12.154 12.183 - 12.235: 90.7445% ( 15) 00:16:12.154 12.235 - 12.286: 90.9766% ( 15) 00:16:12.154 12.286 - 12.337: 91.1469% ( 11) 00:16:12.154 12.337 - 12.389: 91.3790% ( 15) 00:16:12.154 12.389 - 12.440: 91.5493% ( 11) 00:16:12.154 12.440 - 12.492: 91.7041% ( 10) 00:16:12.154 12.492 - 12.543: 91.8588% ( 10) 00:16:12.154 12.543 - 12.594: 92.1374% ( 18) 00:16:12.154 12.594 - 12.646: 92.2613% ( 8) 00:16:12.154 12.646 - 12.697: 92.4315% ( 11) 00:16:12.154 12.697 - 12.749: 92.6018% ( 11) 00:16:12.154 12.749 - 12.800: 92.7256% ( 8) 00:16:12.154 12.800 - 12.851: 92.8649% ( 9) 00:16:12.154 12.851 - 12.903: 93.0197% ( 10) 00:16:12.154 12.903 - 12.954: 93.1744% ( 10) 00:16:12.154 12.954 - 13.006: 93.2983% ( 8) 00:16:12.154 13.006 - 13.057: 93.5459% ( 16) 00:16:12.154 13.057 - 13.108: 93.7161% ( 11) 00:16:12.154 13.108 - 13.160: 93.7935% ( 5) 00:16:12.154 13.160 - 13.263: 93.9019% ( 7) 00:16:12.154 13.263 - 13.365: 94.1340% ( 15) 00:16:12.154 13.365 - 13.468: 94.4436% ( 20) 00:16:12.154 13.468 - 13.571: 94.6448% ( 13) 00:16:12.154 13.571 - 13.674: 94.8150% ( 11) 00:16:12.154 13.674 - 13.777: 95.0317% ( 14) 00:16:12.154 13.777 - 13.880: 95.1865% ( 10) 00:16:12.154 13.880 - 13.982: 95.4651% ( 18) 00:16:12.154 13.982 - 14.085: 95.6199% ( 10) 00:16:12.154 14.085 - 14.188: 95.8366% ( 14) 00:16:12.154 14.188 - 14.291: 95.9604% ( 8) 00:16:12.154 14.291 - 14.394: 96.1461% ( 12) 00:16:12.154 14.394 - 14.496: 96.3009% ( 10) 00:16:12.154 14.496 - 14.599: 96.4092% ( 7) 00:16:12.154 14.599 - 14.702: 96.4711% ( 4) 00:16:12.154 14.702 - 14.805: 96.5640% ( 6) 00:16:12.154 14.805 - 14.908: 96.6569% ( 6) 00:16:12.154 14.908 - 15.010: 96.7497% ( 6) 00:16:12.154 15.010 - 15.113: 96.8116% ( 4) 00:16:12.154 15.113 - 15.216: 96.9200% ( 7) 00:16:12.154 15.216 - 15.319: 97.0283% ( 7) 00:16:12.154 15.319 - 15.422: 97.0902% ( 4) 00:16:12.154 15.422 - 15.524: 97.1521% ( 4) 00:16:12.154 15.524 - 15.627: 97.2295% ( 5) 00:16:12.154 15.627 - 15.730: 97.2605% ( 2) 00:16:12.154 15.730 - 15.833: 97.3069% ( 3) 00:16:12.154 15.936 - 16.039: 97.3379% ( 2) 00:16:12.154 16.039 - 16.141: 97.3998% ( 4) 00:16:12.154 16.141 - 16.244: 97.4307% ( 2) 00:16:12.154 16.244 - 16.347: 97.4772% ( 3) 00:16:12.154 16.347 - 16.450: 97.5391% ( 4) 00:16:12.154 16.450 - 16.553: 97.5546% ( 1) 00:16:12.154 16.553 - 16.655: 97.6010% 
( 3) 00:16:12.154 16.861 - 16.964: 97.6319% ( 2) 00:16:12.154 16.964 - 17.067: 97.6474% ( 1) 00:16:12.154 17.067 - 17.169: 97.6629% ( 1) 00:16:12.154 17.169 - 17.272: 97.6784% ( 1) 00:16:12.154 17.272 - 17.375: 97.6939% ( 1) 00:16:12.154 17.375 - 17.478: 97.7403% ( 3) 00:16:12.154 17.478 - 17.581: 97.7712% ( 2) 00:16:12.154 17.581 - 17.684: 97.8022% ( 2) 00:16:12.154 17.684 - 17.786: 97.8486% ( 3) 00:16:12.154 17.786 - 17.889: 97.8796% ( 2) 00:16:12.154 17.889 - 17.992: 97.9105% ( 2) 00:16:12.154 17.992 - 18.095: 97.9725% ( 4) 00:16:12.154 18.095 - 18.198: 98.0344% ( 4) 00:16:12.154 18.198 - 18.300: 98.0808% ( 3) 00:16:12.154 18.300 - 18.403: 98.1272% ( 3) 00:16:12.154 18.403 - 18.506: 98.1582% ( 2) 00:16:12.154 18.506 - 18.609: 98.1737% ( 1) 00:16:12.154 18.609 - 18.712: 98.1891% ( 1) 00:16:12.154 18.712 - 18.814: 98.2510% ( 4) 00:16:12.154 18.814 - 18.917: 98.2665% ( 1) 00:16:12.154 18.917 - 19.020: 98.3130% ( 3) 00:16:12.154 19.020 - 19.123: 98.3284% ( 1) 00:16:12.154 19.123 - 19.226: 98.3439% ( 1) 00:16:12.154 19.226 - 19.329: 98.4213% ( 5) 00:16:12.154 19.329 - 19.431: 98.4832% ( 4) 00:16:12.154 19.431 - 19.534: 98.5142% ( 2) 00:16:12.154 19.534 - 19.637: 98.5451% ( 2) 00:16:12.154 19.637 - 19.740: 98.5761% ( 2) 00:16:12.154 19.740 - 19.843: 98.6070% ( 2) 00:16:12.154 19.945 - 20.048: 98.6225% ( 1) 00:16:12.154 20.048 - 20.151: 98.6535% ( 2) 00:16:12.154 20.151 - 20.254: 98.6844% ( 2) 00:16:12.154 20.254 - 20.357: 98.8082% ( 8) 00:16:12.154 20.357 - 20.459: 98.8547% ( 3) 00:16:12.154 20.459 - 20.562: 98.8856% ( 2) 00:16:12.154 20.562 - 20.665: 98.9166% ( 2) 00:16:12.154 20.665 - 20.768: 98.9475% ( 2) 00:16:12.154 20.871 - 20.973: 98.9940% ( 3) 00:16:12.154 20.973 - 21.076: 99.0094% ( 1) 00:16:12.154 21.076 - 21.179: 99.0249% ( 1) 00:16:12.154 21.179 - 21.282: 99.0404% ( 1) 00:16:12.154 21.282 - 21.385: 99.0714% ( 2) 00:16:12.154 21.385 - 21.488: 99.1333% ( 4) 00:16:12.154 21.488 - 21.590: 99.1487% ( 1) 00:16:12.154 21.590 - 21.693: 99.1797% ( 2) 00:16:12.154 21.693 - 21.796: 99.1952% ( 1) 00:16:12.154 22.104 - 22.207: 99.2106% ( 1) 00:16:12.154 22.207 - 22.310: 99.2261% ( 1) 00:16:12.154 22.516 - 22.618: 99.2416% ( 1) 00:16:12.154 22.721 - 22.824: 99.2571% ( 1) 00:16:12.154 22.824 - 22.927: 99.2726% ( 1) 00:16:12.154 23.235 - 23.338: 99.3035% ( 2) 00:16:12.154 23.338 - 23.441: 99.3190% ( 1) 00:16:12.154 23.544 - 23.647: 99.3345% ( 1) 00:16:12.154 23.647 - 23.749: 99.3499% ( 1) 00:16:12.154 23.749 - 23.852: 99.3654% ( 1) 00:16:12.154 24.058 - 24.161: 99.3809% ( 1) 00:16:12.154 24.161 - 24.263: 99.3964% ( 1) 00:16:12.154 24.263 - 24.366: 99.4119% ( 1) 00:16:12.154 24.366 - 24.469: 99.4273% ( 1) 00:16:12.154 24.572 - 24.675: 99.4583% ( 2) 00:16:12.154 24.675 - 24.778: 99.4892% ( 2) 00:16:12.154 24.778 - 24.880: 99.5202% ( 2) 00:16:12.154 24.880 - 24.983: 99.5512% ( 2) 00:16:12.154 24.983 - 25.086: 99.5666% ( 1) 00:16:12.154 25.189 - 25.292: 99.5821% ( 1) 00:16:12.154 25.394 - 25.497: 99.5976% ( 1) 00:16:12.154 26.320 - 26.525: 99.6131% ( 1) 00:16:12.154 26.937 - 27.142: 99.6285% ( 1) 00:16:12.154 27.142 - 27.348: 99.6750% ( 3) 00:16:12.154 27.348 - 27.553: 99.7214% ( 3) 00:16:12.154 27.553 - 27.759: 99.7524% ( 2) 00:16:12.154 28.170 - 28.376: 99.7678% ( 1) 00:16:12.154 28.376 - 28.582: 99.7833% ( 1) 00:16:12.154 28.582 - 28.787: 99.7988% ( 1) 00:16:12.154 28.787 - 28.993: 99.8143% ( 1) 00:16:12.154 29.198 - 29.404: 99.8297% ( 1) 00:16:12.154 29.610 - 29.815: 99.8607% ( 2) 00:16:12.154 30.843 - 31.049: 99.8917% ( 2) 00:16:12.154 31.049 - 31.255: 99.9071% ( 1) 00:16:12.154 31.871 - 32.077: 
99.9226% ( 1)
00:16:12.154 32.900 - 33.105: 99.9381% ( 1)
00:16:12.154 33.105 - 33.311: 99.9536% ( 1)
00:16:12.154 33.516 - 33.722: 99.9690% ( 1)
00:16:12.154 35.161 - 35.367: 100.0000% ( 2)
00:16:12.154
00:16:12.154
00:16:12.154 real 0m1.336s
00:16:12.154 user 0m1.127s
00:16:12.154 sys 0m0.156s
00:16:12.154 22:57:39 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:12.154 22:57:39 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:16:12.154 ************************************
00:16:12.154 END TEST nvme_overhead
00:16:12.154 ************************************
00:16:12.155 22:57:39 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:16:12.155 22:57:39 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:16:12.155 22:57:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:12.155 22:57:39 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:12.155 ************************************
00:16:12.155 START TEST nvme_arbitration
00:16:12.155 ************************************
00:16:12.155 22:57:39 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:16:16.410 Initializing NVMe Controllers
00:16:16.410 Attached to 0000:00:10.0
00:16:16.410 Attached to 0000:00:11.0
00:16:16.410 Attached to 0000:00:13.0
00:16:16.410 Attached to 0000:00:12.0
00:16:16.410 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:16:16.410 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:16:16.410 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:16:16.410 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:16:16.410 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:16:16.410 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:16:16.410 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:16:16.410 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:16:16.410 Initialization complete. Launching workers.
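The "urgent priority queue" lines that follow come from NVMe queue arbitration: each worker opens its I/O queue pair with an explicit priority class, and a controller running weighted round robin arbitrates between queues accordingly. A minimal sketch of how a qpair gets a priority in SPDK (assuming WRR is requested at controller init through the arb_mechanism controller option in the probe callback; the arbitration example's exact option wiring may differ):

    /* In the probe callback, before attach:
     *     opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
     * Without WRR enabled, qprio has no effect. */

    struct spdk_nvme_io_qpair_opts qopts;
    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qopts, sizeof(qopts));
    qopts.qprio = SPDK_NVME_QPRIO_URGENT;   /* or HIGH / MEDIUM / LOW */

    struct spdk_nvme_qpair *qpair =
        spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qopts, sizeof(qopts));

Note that in this run every worker reports the urgent class, so the IO/s spread in the results below reflects per-core and per-controller load rather than a priority difference.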
00:16:16.410 Starting thread on core 1 with urgent priority queue
00:16:16.410 Starting thread on core 2 with urgent priority queue
00:16:16.410 Starting thread on core 3 with urgent priority queue
00:16:16.410 Starting thread on core 0 with urgent priority queue
00:16:16.410 QEMU NVMe Ctrl (12340 ) core 0: 469.33 IO/s 213.07 secs/100000 ios
00:16:16.410 QEMU NVMe Ctrl (12342 ) core 0: 469.33 IO/s 213.07 secs/100000 ios
00:16:16.410 QEMU NVMe Ctrl (12341 ) core 1: 512.00 IO/s 195.31 secs/100000 ios
00:16:16.410 QEMU NVMe Ctrl (12342 ) core 1: 512.00 IO/s 195.31 secs/100000 ios
00:16:16.410 QEMU NVMe Ctrl (12343 ) core 2: 533.33 IO/s 187.50 secs/100000 ios
00:16:16.410 QEMU NVMe Ctrl (12342 ) core 3: 597.33 IO/s 167.41 secs/100000 ios
00:16:16.410 ========================================================
00:16:16.410
00:16:16.410
00:16:16.410 real 0m3.469s
00:16:16.410 user 0m9.448s
00:16:16.410 sys 0m0.179s
00:16:16.410 22:57:42 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:16.410 22:57:42 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:16:16.410 ************************************
00:16:16.410 END TEST nvme_arbitration
00:16:16.410 ************************************
00:16:16.410 22:57:43 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:16:16.410 22:57:43 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:16:16.410 22:57:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:16.410 22:57:43 nvme -- common/autotest_common.sh@10 -- # set +x
00:16:16.410 ************************************
00:16:16.410 START TEST nvme_single_aen
00:16:16.410 ************************************
00:16:16.410 22:57:43 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:16:16.410 Asynchronous Event Request test
00:16:16.410 Attached to 0000:00:10.0
00:16:16.410 Attached to 0000:00:11.0
00:16:16.410 Attached to 0000:00:13.0
00:16:16.410 Attached to 0000:00:12.0
00:16:16.410 Reset controller to setup AER completions for this process
00:16:16.410 Registering asynchronous event callbacks...
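What follows is the AER round trip: the test registers an asynchronous-event callback, lowers each controller's temperature threshold below the current temperature (323 Kelvin here) so the controller raises a SMART/health event, and the callback path then restores the original 343 Kelvin threshold. The core of that dance, sketched against the public SPDK API (aer_seen and set_feature_done are illustrative names):

    static volatile bool aer_seen;

    static void set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl) {
        (void)arg; (void)cpl;
    }

    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl) {
        /* Completion dword 0 of an AER encodes event type, event info,
         * and the associated log page identifier. */
        uint32_t log_page = (cpl->cdw0 >> 16) & 0xFF;  /* 0x02 = SMART / health */
        printf("aer_cb for log page %u\n", log_page);
        aer_seen = true;
        /* ...restore the original 343 K threshold here and re-arm... */
    }

    /* ... */
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

    /* Set Features: temperature threshold (low word of cdw11) to 0 K,
     * i.e. below the current 323 K, so the controller fires the event. */
    spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                    0 /* cdw11 */, 0 /* cdw12 */, NULL, 0,
                                    set_feature_done, NULL);

    while (!aer_seen) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }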
00:16:16.410 Getting orig temperature thresholds of all controllers 00:16:16.410 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:16.410 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:16.410 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:16.410 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:16.410 Setting all controllers temperature threshold low to trigger AER 00:16:16.410 Waiting for all controllers temperature threshold to be set lower 00:16:16.410 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:16.410 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:16:16.410 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:16.410 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:16:16.410 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:16.410 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:16:16.410 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:16.410 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:16:16.410 Waiting for all controllers to trigger AER and reset threshold 00:16:16.410 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:16.410 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:16.410 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:16.410 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:16.410 Cleaning up... 00:16:16.410 00:16:16.410 real 0m0.310s 00:16:16.410 user 0m0.103s 00:16:16.410 sys 0m0.170s 00:16:16.410 22:57:43 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.410 22:57:43 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:16:16.410 ************************************ 00:16:16.410 END TEST nvme_single_aen 00:16:16.410 ************************************ 00:16:16.410 22:57:43 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:16:16.410 22:57:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:16.410 22:57:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.410 22:57:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:16.410 ************************************ 00:16:16.410 START TEST nvme_doorbell_aers 00:16:16.410 ************************************ 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
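
get_nvme_bdfs, expanded inline above, is how every suite in this file discovers its targets: gen_nvme.sh emits a JSON bdev config for the local controllers and jq extracts each PCI address. The doorbell test then visits each address with a 10-second cap per controller; timeout --preserve-status kills the binary when the budget expires but keeps the binary's own exit status, so a hang is distinguishable from a test failure. Both pieces, condensed:

    # Enumerate local NVMe PCI addresses the same way the harness does.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo 'no NVMe controllers found' >&2; exit 1; }
    # ...then run the doorbell/AER exerciser against each controller, 10 s max.
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done
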
00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:16:16.410 22:57:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:16.668 [2024-12-09 22:57:43.839550] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:26.642 Executing: test_write_invalid_db 00:16:26.642 Waiting for AER completion... 00:16:26.642 Failure: test_write_invalid_db 00:16:26.642 00:16:26.642 Executing: test_invalid_db_write_overflow_sq 00:16:26.642 Waiting for AER completion... 00:16:26.642 Failure: test_invalid_db_write_overflow_sq 00:16:26.642 00:16:26.642 Executing: test_invalid_db_write_overflow_cq 00:16:26.642 Waiting for AER completion... 00:16:26.642 Failure: test_invalid_db_write_overflow_cq 00:16:26.642 00:16:26.642 22:57:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:16:26.642 22:57:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:16:26.642 [2024-12-09 22:57:53.900030] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:36.624 Executing: test_write_invalid_db 00:16:36.624 Waiting for AER completion... 00:16:36.624 Failure: test_write_invalid_db 00:16:36.624 00:16:36.624 Executing: test_invalid_db_write_overflow_sq 00:16:36.624 Waiting for AER completion... 00:16:36.624 Failure: test_invalid_db_write_overflow_sq 00:16:36.624 00:16:36.624 Executing: test_invalid_db_write_overflow_cq 00:16:36.624 Waiting for AER completion... 00:16:36.624 Failure: test_invalid_db_write_overflow_cq 00:16:36.624 00:16:36.624 22:58:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:16:36.624 22:58:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:16:36.624 [2024-12-09 22:58:03.955312] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:46.616 Executing: test_write_invalid_db 00:16:46.616 Waiting for AER completion... 00:16:46.616 Failure: test_write_invalid_db 00:16:46.616 00:16:46.616 Executing: test_invalid_db_write_overflow_sq 00:16:46.616 Waiting for AER completion... 00:16:46.616 Failure: test_invalid_db_write_overflow_sq 00:16:46.616 00:16:46.616 Executing: test_invalid_db_write_overflow_cq 00:16:46.616 Waiting for AER completion... 
00:16:46.616 Failure: test_invalid_db_write_overflow_cq 00:16:46.616 00:16:46.616 22:58:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:16:46.616 22:58:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:16:46.875 [2024-12-09 22:58:14.019914] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.850 Executing: test_write_invalid_db 00:16:56.850 Waiting for AER completion... 00:16:56.850 Failure: test_write_invalid_db 00:16:56.850 00:16:56.850 Executing: test_invalid_db_write_overflow_sq 00:16:56.850 Waiting for AER completion... 00:16:56.850 Failure: test_invalid_db_write_overflow_sq 00:16:56.850 00:16:56.850 Executing: test_invalid_db_write_overflow_cq 00:16:56.850 Waiting for AER completion... 00:16:56.850 Failure: test_invalid_db_write_overflow_cq 00:16:56.850 00:16:56.850 00:16:56.850 real 0m40.336s 00:16:56.850 user 0m28.389s 00:16:56.850 sys 0m11.560s 00:16:56.850 22:58:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.850 22:58:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:16:56.850 ************************************ 00:16:56.850 END TEST nvme_doorbell_aers 00:16:56.850 ************************************ 00:16:56.850 22:58:23 nvme -- nvme/nvme.sh@97 -- # uname 00:16:56.850 22:58:23 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:16:56.850 22:58:23 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:16:56.850 22:58:23 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:16:56.850 22:58:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.850 22:58:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:56.850 ************************************ 00:16:56.850 START TEST nvme_multi_aen 00:16:56.850 ************************************ 00:16:56.851 22:58:23 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:16:56.851 [2024-12-09 22:58:24.142636] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.851 [2024-12-09 22:58:24.142742] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.851 [2024-12-09 22:58:24.142765] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.851 [2024-12-09 22:58:24.144691] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.851 [2024-12-09 22:58:24.144742] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.851 [2024-12-09 22:58:24.144761] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.851 [2024-12-09 22:58:24.146029] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. 
Dropping the request. 00:16:56.851 [2024-12-09 22:58:24.146078] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.851 [2024-12-09 22:58:24.146096] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.851 [2024-12-09 22:58:24.147383] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.851 [2024-12-09 22:58:24.147590] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.851 [2024-12-09 22:58:24.147616] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64849) is not found. Dropping the request. 00:16:56.851 Child process pid: 65370 00:16:57.109 [Child] Asynchronous Event Request test 00:16:57.109 [Child] Attached to 0000:00:10.0 00:16:57.109 [Child] Attached to 0000:00:11.0 00:16:57.109 [Child] Attached to 0000:00:13.0 00:16:57.109 [Child] Attached to 0000:00:12.0 00:16:57.109 [Child] Registering asynchronous event callbacks... 00:16:57.109 [Child] Getting orig temperature thresholds of all controllers 00:16:57.109 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:57.109 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:57.109 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:57.109 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:57.109 [Child] Waiting for all controllers to trigger AER and reset threshold 00:16:57.109 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:57.109 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:57.109 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:57.109 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:57.109 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:57.109 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:57.109 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:57.109 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:57.109 [Child] Cleaning up... 00:16:57.392 Asynchronous Event Request test 00:16:57.392 Attached to 0000:00:10.0 00:16:57.392 Attached to 0000:00:11.0 00:16:57.392 Attached to 0000:00:13.0 00:16:57.392 Attached to 0000:00:12.0 00:16:57.392 Reset controller to setup AER completions for this process 00:16:57.392 Registering asynchronous event callbacks... 
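
nvme_multi_aen repeats the AER exercise with -m, which forks a child process (the [Child] lines above, pid 65370 here) so that two processes attached to the same four controllers handle events concurrently; once the child finishes, the parent re-runs the same sequence below. The *ERROR* "owning process ... is not found" lines are the multi-process driver reaping admin requests still queued for an already-exited test process (pid 64849 from the earlier suites); the suite still ends in END TEST, so they read as expected noise rather than failures. Harness invocation:

    # -m forks a child so parent and child exercise AER concurrently;
    # -T drives the temperature-threshold flow in both processes.
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
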
00:16:57.392 Getting orig temperature thresholds of all controllers 00:16:57.392 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:57.392 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:57.392 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:57.392 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:16:57.392 Setting all controllers temperature threshold low to trigger AER 00:16:57.392 Waiting for all controllers temperature threshold to be set lower 00:16:57.392 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:57.392 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:16:57.392 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:57.392 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:16:57.392 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:57.392 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:16:57.392 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:16:57.392 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:16:57.392 Waiting for all controllers to trigger AER and reset threshold 00:16:57.392 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:57.392 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:57.392 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:57.392 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:16:57.392 Cleaning up... 00:16:57.392 00:16:57.392 real 0m0.680s 00:16:57.392 user 0m0.239s 00:16:57.392 sys 0m0.331s 00:16:57.392 22:58:24 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.392 22:58:24 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:16:57.392 ************************************ 00:16:57.392 END TEST nvme_multi_aen 00:16:57.392 ************************************ 00:16:57.392 22:58:24 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:16:57.392 22:58:24 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:57.392 22:58:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.392 22:58:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.392 ************************************ 00:16:57.392 START TEST nvme_startup 00:16:57.392 ************************************ 00:16:57.392 22:58:24 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:16:57.651 Initializing NVMe Controllers 00:16:57.651 Attached to 0000:00:10.0 00:16:57.651 Attached to 0000:00:11.0 00:16:57.651 Attached to 0000:00:13.0 00:16:57.651 Attached to 0000:00:12.0 00:16:57.651 Initialization complete. 00:16:57.651 Time used:211870.797 (us). 
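
nvme_startup is a smoke test of cold attach: the startup tool times how long probing and initializing all four controllers takes and prints it above in microseconds (about 0.21 s here). The -t 1000000 argument is the limit the harness passes to the tool; its exact units are defined by the tool itself, so it is reproduced here as-is:

    # Harness invocation, reproduced; "Time used" above is the measured
    # controller-initialization time in microseconds.
    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
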
00:16:57.651 00:16:57.651 real 0m0.325s 00:16:57.651 user 0m0.106s 00:16:57.651 sys 0m0.165s 00:16:57.651 22:58:24 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.651 22:58:24 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:16:57.651 ************************************ 00:16:57.651 END TEST nvme_startup 00:16:57.651 ************************************ 00:16:57.651 22:58:24 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:16:57.651 22:58:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.651 22:58:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.651 22:58:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.651 ************************************ 00:16:57.651 START TEST nvme_multi_secondary 00:16:57.651 ************************************ 00:16:57.651 22:58:24 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:16:57.651 22:58:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65426 00:16:57.912 22:58:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:16:57.912 22:58:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65427 00:16:57.912 22:58:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:16:57.912 22:58:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:17:01.203 Initializing NVMe Controllers 00:17:01.203 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:01.203 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:17:01.203 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:17:01.203 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:17:01.203 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:17:01.203 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:17:01.203 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:17:01.203 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:17:01.203 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:17:01.203 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:17:01.203 Initialization complete. Launching workers. 
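
nvme_multi_secondary is really a test of SPDK's multi-process (primary/secondary) support: three spdk_nvme_perf instances join the same shared-memory group (-i 0) on disjoint core masks, attach to the same controllers concurrently, and the harness waits on the background pids (65426 and 65427 above) before moving on. Condensed from the nvme.sh@51-57 xtrace above:

    # Three perf processes sharing shm group 0; the 5 s run outlives the 3 s runs.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4    # foreground run
    wait "$pid0" "$pid1"
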
00:17:01.203 ======================================================== 00:17:01.203 Latency(us) 00:17:01.203 Device Information : IOPS MiB/s Average min max 00:17:01.203 PCIE (0000:00:10.0) NSID 1 from core 1: 5211.69 20.36 3067.83 944.96 8244.23 00:17:01.203 PCIE (0000:00:11.0) NSID 1 from core 1: 5211.69 20.36 3069.60 962.70 7821.57 00:17:01.203 PCIE (0000:00:13.0) NSID 1 from core 1: 5211.69 20.36 3069.70 960.84 7853.62 00:17:01.203 PCIE (0000:00:12.0) NSID 1 from core 1: 5211.69 20.36 3069.94 965.80 7808.51 00:17:01.203 PCIE (0000:00:12.0) NSID 2 from core 1: 5211.69 20.36 3070.20 961.78 7990.94 00:17:01.203 PCIE (0000:00:12.0) NSID 3 from core 1: 5217.02 20.38 3067.30 969.56 7431.16 00:17:01.203 ======================================================== 00:17:01.203 Total : 31275.48 122.17 3069.10 944.96 8244.23 00:17:01.203 00:17:01.203 Initializing NVMe Controllers 00:17:01.203 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:01.203 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:17:01.203 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:17:01.203 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:17:01.203 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:17:01.203 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:17:01.203 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:17:01.203 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:17:01.203 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:17:01.203 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:17:01.203 Initialization complete. Launching workers. 00:17:01.203 ======================================================== 00:17:01.203 Latency(us) 00:17:01.203 Device Information : IOPS MiB/s Average min max 00:17:01.203 PCIE (0000:00:10.0) NSID 1 from core 2: 3120.24 12.19 5126.47 1375.30 13656.52 00:17:01.203 PCIE (0000:00:11.0) NSID 1 from core 2: 3120.24 12.19 5127.77 1231.44 12788.94 00:17:01.203 PCIE (0000:00:13.0) NSID 1 from core 2: 3120.24 12.19 5127.77 1257.53 13217.86 00:17:01.203 PCIE (0000:00:12.0) NSID 1 from core 2: 3120.24 12.19 5126.96 1461.75 13332.64 00:17:01.203 PCIE (0000:00:12.0) NSID 2 from core 2: 3120.24 12.19 5127.95 1271.39 12924.60 00:17:01.203 PCIE (0000:00:12.0) NSID 3 from core 2: 3120.24 12.19 5127.39 1374.27 12920.34 00:17:01.203 ======================================================== 00:17:01.203 Total : 18721.44 73.13 5127.38 1231.44 13656.52 00:17:01.203 00:17:01.462 22:58:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65426 00:17:03.364 Initializing NVMe Controllers 00:17:03.364 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:03.364 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:17:03.364 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:17:03.364 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:17:03.364 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:03.364 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:17:03.364 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:17:03.364 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:17:03.364 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:17:03.364 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:17:03.364 Initialization complete. Launching workers. 
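
The two latency tables above are internally consistent with the workload: at queue depth 16, Little's law says outstanding I/O = IOPS x mean latency, and both runs multiply out to roughly 16.0 per namespace, matching -q 16. Quick check with the core-1 and core-2 numbers from above:

    # Little's law sanity check: QD = IOPS x mean latency (values from above).
    awk 'BEGIN {
        printf "core 1: %.1f outstanding\n", 5211.69 * 3067.83e-6   # ~16.0
        printf "core 2: %.1f outstanding\n", 3120.24 * 5126.47e-6   # ~16.0
    }'
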
00:17:03.364 ======================================================== 00:17:03.364 Latency(us) 00:17:03.364 Device Information : IOPS MiB/s Average min max 00:17:03.364 PCIE (0000:00:10.0) NSID 1 from core 0: 8472.77 33.10 1886.88 945.49 8789.14 00:17:03.364 PCIE (0000:00:11.0) NSID 1 from core 0: 8472.77 33.10 1887.93 968.48 9444.24 00:17:03.364 PCIE (0000:00:13.0) NSID 1 from core 0: 8472.77 33.10 1887.86 890.02 8884.38 00:17:03.364 PCIE (0000:00:12.0) NSID 1 from core 0: 8472.77 33.10 1887.80 846.29 8536.08 00:17:03.364 PCIE (0000:00:12.0) NSID 2 from core 0: 8472.77 33.10 1887.74 818.92 8117.83 00:17:03.364 PCIE (0000:00:12.0) NSID 3 from core 0: 8472.77 33.10 1887.67 761.68 8577.74 00:17:03.364 ======================================================== 00:17:03.364 Total : 50836.65 198.58 1887.65 761.68 9444.24 00:17:03.364 00:17:03.364 22:58:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65427 00:17:03.364 22:58:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65497 00:17:03.364 22:58:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:17:03.364 22:58:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65498 00:17:03.364 22:58:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:17:03.364 22:58:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:17:06.653 Initializing NVMe Controllers 00:17:06.653 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:06.653 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:17:06.653 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:17:06.653 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:17:06.653 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:06.653 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:17:06.653 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:17:06.653 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:17:06.653 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:17:06.653 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:17:06.653 Initialization complete. Launching workers. 
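
The suite then repeats the experiment with the roles flipped (nvme.sh@60-66 above): this time the two 3-second runs go to the background on cores 0x1 and 0x2 while the 5-second run is the foreground process on core 0x4, so secondary processes are seen exiting both before and after their peers. Equivalent sketch:

    # Second permutation: the long (5 s) run is now the foreground process.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 & pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
    wait "$pid0" "$pid1"
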
00:17:06.653 ======================================================== 00:17:06.653 Latency(us) 00:17:06.653 Device Information : IOPS MiB/s Average min max 00:17:06.653 PCIE (0000:00:10.0) NSID 1 from core 0: 5431.44 21.22 2943.67 950.29 8957.69 00:17:06.653 PCIE (0000:00:11.0) NSID 1 from core 0: 5431.44 21.22 2945.50 965.65 9631.74 00:17:06.653 PCIE (0000:00:13.0) NSID 1 from core 0: 5431.44 21.22 2945.37 980.49 9628.20 00:17:06.653 PCIE (0000:00:12.0) NSID 1 from core 0: 5431.44 21.22 2945.87 953.12 8186.20 00:17:06.653 PCIE (0000:00:12.0) NSID 2 from core 0: 5431.44 21.22 2946.09 984.80 8503.55 00:17:06.653 PCIE (0000:00:12.0) NSID 3 from core 0: 5436.77 21.24 2943.30 984.83 8242.22 00:17:06.653 ======================================================== 00:17:06.653 Total : 32593.98 127.32 2944.97 950.29 9631.74 00:17:06.653 00:17:06.912 Initializing NVMe Controllers 00:17:06.912 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:06.912 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:17:06.912 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:17:06.912 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:17:06.912 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:17:06.912 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:17:06.912 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:17:06.912 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:17:06.912 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:17:06.912 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:17:06.912 Initialization complete. Launching workers. 00:17:06.912 ======================================================== 00:17:06.912 Latency(us) 00:17:06.912 Device Information : IOPS MiB/s Average min max 00:17:06.912 PCIE (0000:00:10.0) NSID 1 from core 1: 5050.38 19.73 3165.60 1014.86 6102.74 00:17:06.912 PCIE (0000:00:11.0) NSID 1 from core 1: 5050.38 19.73 3167.52 1075.32 5922.97 00:17:06.912 PCIE (0000:00:13.0) NSID 1 from core 1: 5050.38 19.73 3167.45 1059.72 5863.43 00:17:06.912 PCIE (0000:00:12.0) NSID 1 from core 1: 5050.38 19.73 3167.37 1052.36 5728.84 00:17:06.912 PCIE (0000:00:12.0) NSID 2 from core 1: 5050.38 19.73 3167.26 1063.81 5698.17 00:17:06.912 PCIE (0000:00:12.0) NSID 3 from core 1: 5050.38 19.73 3167.20 1050.50 6170.98 00:17:06.912 ======================================================== 00:17:06.912 Total : 30302.30 118.37 3167.07 1014.86 6170.98 00:17:06.912 00:17:08.824 Initializing NVMe Controllers 00:17:08.824 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:08.824 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:17:08.824 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:17:08.824 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:17:08.825 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:17:08.825 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:17:08.825 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:17:08.825 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:17:08.825 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:17:08.825 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:17:08.825 Initialization complete. Launching workers. 
00:17:08.825 ======================================================== 00:17:08.825 Latency(us) 00:17:08.825 Device Information : IOPS MiB/s Average min max 00:17:08.825 PCIE (0000:00:10.0) NSID 1 from core 2: 3088.92 12.07 5177.98 1088.34 13330.37 00:17:08.825 PCIE (0000:00:11.0) NSID 1 from core 2: 3088.92 12.07 5178.84 1124.96 13135.34 00:17:08.825 PCIE (0000:00:13.0) NSID 1 from core 2: 3088.92 12.07 5179.23 1119.91 12815.31 00:17:08.825 PCIE (0000:00:12.0) NSID 1 from core 2: 3088.92 12.07 5179.10 1125.52 12475.42 00:17:08.825 PCIE (0000:00:12.0) NSID 2 from core 2: 3088.92 12.07 5178.99 1127.84 13126.39 00:17:08.825 PCIE (0000:00:12.0) NSID 3 from core 2: 3088.92 12.07 5178.64 1001.18 13186.78 00:17:08.825 ======================================================== 00:17:08.825 Total : 18533.52 72.40 5178.80 1001.18 13330.37 00:17:08.825 00:17:09.092 ************************************ 00:17:09.092 END TEST nvme_multi_secondary 00:17:09.092 ************************************ 00:17:09.092 22:58:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65497 00:17:09.092 22:58:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65498 00:17:09.092 00:17:09.092 real 0m11.182s 00:17:09.092 user 0m18.648s 00:17:09.092 sys 0m1.081s 00:17:09.092 22:58:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.092 22:58:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:17:09.092 22:58:36 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:17:09.092 22:58:36 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:17:09.092 22:58:36 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64424 ]] 00:17:09.092 22:58:36 nvme -- common/autotest_common.sh@1094 -- # kill 64424 00:17:09.092 22:58:36 nvme -- common/autotest_common.sh@1095 -- # wait 64424 00:17:09.092 [2024-12-09 22:58:36.235681] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.236101] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.236167] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.236204] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.241247] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.241328] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.241360] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.241396] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.246244] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 
00:17:09.092 [2024-12-09 22:58:36.246296] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.246318] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.246340] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.250051] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.250258] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.250285] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.092 [2024-12-09 22:58:36.250308] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65369) is not found. Dropping the request. 00:17:09.361 22:58:36 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:17:09.361 22:58:36 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:17:09.361 22:58:36 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:17:09.361 22:58:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:09.361 22:58:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.361 22:58:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:09.361 ************************************ 00:17:09.361 START TEST bdev_nvme_reset_stuck_adm_cmd 00:17:09.361 ************************************ 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:17:09.361 * Looking for test storage... 
00:17:09.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:17:09.361 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:09.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.362 --rc genhtml_branch_coverage=1 00:17:09.362 --rc genhtml_function_coverage=1 00:17:09.362 --rc genhtml_legend=1 00:17:09.362 --rc geninfo_all_blocks=1 00:17:09.362 --rc geninfo_unexecuted_blocks=1 00:17:09.362 00:17:09.362 ' 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:09.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.362 --rc genhtml_branch_coverage=1 00:17:09.362 --rc genhtml_function_coverage=1 00:17:09.362 --rc genhtml_legend=1 00:17:09.362 --rc geninfo_all_blocks=1 00:17:09.362 --rc geninfo_unexecuted_blocks=1 00:17:09.362 00:17:09.362 ' 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:09.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.362 --rc genhtml_branch_coverage=1 00:17:09.362 --rc genhtml_function_coverage=1 00:17:09.362 --rc genhtml_legend=1 00:17:09.362 --rc geninfo_all_blocks=1 00:17:09.362 --rc geninfo_unexecuted_blocks=1 00:17:09.362 00:17:09.362 ' 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:09.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.362 --rc genhtml_branch_coverage=1 00:17:09.362 --rc genhtml_function_coverage=1 00:17:09.362 --rc genhtml_legend=1 00:17:09.362 --rc geninfo_all_blocks=1 00:17:09.362 --rc geninfo_unexecuted_blocks=1 00:17:09.362 00:17:09.362 ' 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:17:09.362 
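
The stuck-admin-command test configured here works in three moves, all visible in the transcript that follows: arm a one-shot error injection that parks the next matching admin command for up to err_injection_timeout (15 s) and, when it finally completes, forces status sct=0/sc=1 (generic status / invalid opcode); submit a Get Features (opc 10, i.e. 0x0a) through bdev_nvme_send_cmd so that it gets parked; then reset the controller and require both that the reset succeeds and that it returns within test_timeout (5 s) instead of waiting out the injection. The RPC sequence, condensed from the log below:

    # Parameters match the xtrace below: opc 10 = Get Features,
    # sct 0 / sc 1 = the INVALID OPCODE (00/01) status seen later in the log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $rpc bdev_nvme_reset_controller nvme0   # must return well inside the 5 s budget
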
22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:09.362 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65660 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65660 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65660 ']' 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
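
spdk_tgt is launched with -m 0xF (four reactors, matching the four "Reactor started on core" lines further down) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A rough stand-in for waitforlisten, under the assumption that polling a cheap RPC such as spdk_get_version is an adequate readiness probe (the real helper lives in autotest_common.sh):

    # Assumption: spdk_get_version serves here only as a readiness probe.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
    tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        kill -0 "$tgt_pid" 2>/dev/null || { echo 'spdk_tgt died' >&2; exit 1; }
        sleep 0.2
    done
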
00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.634 22:58:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:09.634 [2024-12-09 22:58:36.928138] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:17:09.634 [2024-12-09 22:58:36.929042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65660 ] 00:17:09.909 [2024-12-09 22:58:37.134581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:10.178 [2024-12-09 22:58:37.267730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.178 [2024-12-09 22:58:37.267863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.178 [2024-12-09 22:58:37.268006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.178 [2024-12-09 22:58:37.268043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:11.118 nvme0n1 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_H3tI9.txt 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:11.118 true 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733785118 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65694 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:17:11.118 22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:11.118 
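
The timing bookkeeping around the reset is plain epoch arithmetic: start_time is sampled right after the injection is armed (1733785118 above), the test sleeps 2 s so the parked Get Features is definitely outstanding, resets the controller, and records the elapsed seconds (diff_time=2 further down), which must not exceed test_timeout. Skeleton of that flow:

    # Timing skeleton; this run measured diff_time=2 against the 5 s budget.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    start_time=$(date +%s)
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0
    diff_time=$(( $(date +%s) - start_time ))
    (( diff_time > 5 )) && { echo 'reset exceeded budget' >&2; exit 1; }
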
22:58:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:13.650 [2024-12-09 22:58:40.419844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:17:13.650 [2024-12-09 22:58:40.420310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:13.650 [2024-12-09 22:58:40.420444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:13.650 [2024-12-09 22:58:40.420582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:13.650 [2024-12-09 22:58:40.422554] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65694 00:17:13.650 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65694 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65694 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_H3tI9.txt 00:17:13.650 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_H3tI9.txt 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65660 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65660 ']' 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65660 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65660 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:13.651 killing process with pid 65660 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65660' 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65660 00:17:13.651 22:58:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65660 00:17:16.188 22:58:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:17:16.188 22:58:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:17:16.188 00:17:16.188 real 0m6.923s 00:17:16.188 user 0m24.019s 00:17:16.188 sys 0m0.962s 00:17:16.188 22:58:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:16.188 22:58:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:17:16.188 ************************************ 00:17:16.188 END TEST bdev_nvme_reset_stuck_adm_cmd 00:17:16.188 ************************************ 00:17:16.188 22:58:43 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:17:16.188 22:58:43 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:17:16.188 22:58:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:16.188 22:58:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.188 22:58:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.188 ************************************ 00:17:16.188 START TEST nvme_fio 00:17:16.188 ************************************ 00:17:16.188 22:58:43 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:17:16.188 22:58:43 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:16.188 22:58:43 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:17:16.188 22:58:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:17:16.188 22:58:43 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:16.188 22:58:43 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:17:16.188 22:58:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:16.188 22:58:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:16.188 22:58:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:16.447 22:58:43 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:17:16.447 22:58:43 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:17:16.447 22:58:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:17:16.447 22:58:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:17:16.447 22:58:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:17:16.447 22:58:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:16.447 22:58:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:17:16.706 22:58:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:16.706 22:58:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:17:16.966 22:58:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:17:16.966 22:58:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:16.966 22:58:44 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:16.966 22:58:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:17:17.225 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:17.225 fio-3.35 00:17:17.225 Starting 1 thread 00:17:20.536 00:17:20.536 test: (groupid=0, jobs=1): err= 0: pid=65848: Mon Dec 9 22:58:47 2024 00:17:20.536 read: IOPS=21.0k, BW=82.0MiB/s (86.0MB/s)(164MiB/2001msec) 00:17:20.536 slat (nsec): min=3917, max=96295, avg=4887.96, stdev=1463.04 00:17:20.536 clat (usec): min=327, max=14050, avg=3037.97, stdev=602.40 00:17:20.536 lat (usec): min=333, max=14146, avg=3042.86, stdev=603.15 00:17:20.536 clat percentiles (usec): 00:17:20.536 | 1.00th=[ 2278], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2835], 00:17:20.536 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:17:20.536 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3392], 95.00th=[ 3621], 00:17:20.536 | 99.00th=[ 5669], 99.50th=[ 7570], 99.90th=[ 8848], 99.95th=[10945], 00:17:20.536 | 99.99th=[13698] 00:17:20.536 bw ( KiB/s): min=74291, max=87904, per=99.23%, avg=83342.33, stdev=7838.77, samples=3 00:17:20.536 iops : min=18572, max=21976, avg=20835.33, stdev=1960.12, samples=3 00:17:20.536 write: IOPS=20.9k, BW=81.6MiB/s (85.5MB/s)(163MiB/2001msec); 0 zone resets 00:17:20.536 slat (nsec): min=4011, max=61791, avg=5084.35, stdev=1418.50 00:17:20.536 clat (usec): min=276, max=13844, avg=3047.88, stdev=618.90 00:17:20.536 lat (usec): min=282, max=13856, avg=3052.96, stdev=619.65 00:17:20.536 clat percentiles (usec): 00:17:20.536 | 1.00th=[ 2278], 5.00th=[ 2704], 10.00th=[ 2769], 20.00th=[ 2835], 00:17:20.536 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:17:20.536 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3392], 95.00th=[ 3687], 00:17:20.536 | 99.00th=[ 5735], 99.50th=[ 7767], 99.90th=[ 8979], 99.95th=[11338], 00:17:20.536 | 99.99th=[13435] 00:17:20.536 bw ( KiB/s): min=74299, max=88144, per=99.85%, avg=83398.33, stdev=7882.69, samples=3 00:17:20.536 iops : min=18574, max=22036, avg=20849.33, stdev=1971.11, samples=3 00:17:20.536 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:17:20.536 lat (msec) : 2=0.44%, 4=96.23%, 10=3.22%, 20=0.07% 00:17:20.536 cpu : usr=99.25%, sys=0.05%, ctx=17, majf=0, 
minf=607 00:17:20.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:20.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:20.536 issued rwts: total=42015,41783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.536 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:20.536 00:17:20.536 Run status group 0 (all jobs): 00:17:20.536 READ: bw=82.0MiB/s (86.0MB/s), 82.0MiB/s-82.0MiB/s (86.0MB/s-86.0MB/s), io=164MiB (172MB), run=2001-2001msec 00:17:20.536 WRITE: bw=81.6MiB/s (85.5MB/s), 81.6MiB/s-81.6MiB/s (85.5MB/s-85.5MB/s), io=163MiB (171MB), run=2001-2001msec 00:17:20.797 ----------------------------------------------------- 00:17:20.797 Suppressions used: 00:17:20.797 count bytes template 00:17:20.797 1 32 /usr/src/fio/parse.c 00:17:20.797 1 8 libtcmalloc_minimal.so 00:17:20.797 ----------------------------------------------------- 00:17:20.797 00:17:20.797 22:58:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:17:20.797 22:58:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:17:20.797 22:58:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:17:20.797 22:58:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:17:21.056 22:58:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:17:21.056 22:58:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:17:21.315 22:58:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:17:21.315 22:58:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:17:21.315 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:17:21.315 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:21.315 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:21.315 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:21.315 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:21.315 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:17:21.315 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:21.575 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:21.575 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:21.575 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:21.575 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:17:21.575 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:21.575 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:21.575 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:17:21.575 22:58:48 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:21.575 22:58:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:17:21.575 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:21.575 fio-3.35 00:17:21.575 Starting 1 thread 00:17:25.799 00:17:25.799 test: (groupid=0, jobs=1): err= 0: pid=65915: Mon Dec 9 22:58:52 2024 00:17:25.799 read: IOPS=20.8k, BW=81.1MiB/s (85.0MB/s)(162MiB/2001msec) 00:17:25.799 slat (usec): min=3, max=130, avg= 5.17, stdev= 2.00 00:17:25.799 clat (usec): min=215, max=10671, avg=3069.89, stdev=736.16 00:17:25.799 lat (usec): min=228, max=10781, avg=3075.05, stdev=737.23 00:17:25.799 clat percentiles (usec): 00:17:25.799 | 1.00th=[ 2057], 5.00th=[ 2606], 10.00th=[ 2737], 20.00th=[ 2835], 00:17:25.799 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2999], 00:17:25.799 | 70.00th=[ 3064], 80.00th=[ 3130], 90.00th=[ 3261], 95.00th=[ 3589], 00:17:25.799 | 99.00th=[ 7570], 99.50th=[ 8455], 99.90th=[ 9110], 99.95th=[ 9241], 00:17:25.799 | 99.99th=[10421] 00:17:25.799 bw ( KiB/s): min=81664, max=84376, per=99.92%, avg=82936.00, stdev=1363.78, samples=3 00:17:25.799 iops : min=20416, max=21094, avg=20734.00, stdev=340.95, samples=3 00:17:25.799 write: IOPS=20.7k, BW=80.7MiB/s (84.7MB/s)(162MiB/2001msec); 0 zone resets 00:17:25.799 slat (usec): min=3, max=101, avg= 5.48, stdev= 1.87 00:17:25.799 clat (usec): min=190, max=10531, avg=3081.20, stdev=737.16 00:17:25.799 lat (usec): min=195, max=10544, avg=3086.68, stdev=738.17 00:17:25.799 clat percentiles (usec): 00:17:25.799 | 1.00th=[ 2114], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2835], 00:17:25.799 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:17:25.799 | 70.00th=[ 3064], 80.00th=[ 3130], 90.00th=[ 3261], 95.00th=[ 3621], 00:17:25.799 | 99.00th=[ 7701], 99.50th=[ 8455], 99.90th=[ 9110], 99.95th=[ 9241], 00:17:25.799 | 99.99th=[10159] 00:17:25.799 bw ( KiB/s): min=81664, max=84272, per=100.00%, avg=82957.33, stdev=1304.13, samples=3 00:17:25.799 iops : min=20416, max=21068, avg=20739.33, stdev=326.03, samples=3 00:17:25.799 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:17:25.799 lat (msec) : 2=0.74%, 4=95.48%, 10=3.70%, 20=0.01% 00:17:25.799 cpu : usr=99.15%, sys=0.10%, ctx=14, majf=0, minf=607 00:17:25.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:25.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:25.799 issued rwts: total=41524,41358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:25.799 00:17:25.799 Run status group 0 (all jobs): 00:17:25.799 READ: bw=81.1MiB/s (85.0MB/s), 81.1MiB/s-81.1MiB/s (85.0MB/s-85.0MB/s), io=162MiB (170MB), run=2001-2001msec 00:17:25.799 WRITE: bw=80.7MiB/s (84.7MB/s), 80.7MiB/s-80.7MiB/s (84.7MB/s-84.7MB/s), io=162MiB (169MB), run=2001-2001msec 00:17:25.799 ----------------------------------------------------- 00:17:25.799 Suppressions used: 00:17:25.799 count bytes template 00:17:25.799 1 32 /usr/src/fio/parse.c 00:17:25.799 1 8 libtcmalloc_minimal.so 00:17:25.799 ----------------------------------------------------- 00:17:25.799 00:17:25.799 
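The per-controller runs above all repeat the same preamble from autotest_common.sh (lines 1341-1356 in the trace): nvme.sh greps the spdk_nvme_identify output for a namespace and for 'Extended Data LBA' to settle on --bs=4096, and fio_plugin then uses ldd to find whichever sanitizer runtime the SPDK plugin links against and prepends it to LD_PRELOAD, because fio dlopen()s the plugin after startup and the ASAN runtime has to be in the process image before any instrumented code runs. A minimal sketch of that preload step, using the paths from this particular run rather than anything canonical:

```bash
#!/usr/bin/env bash
# Minimal sketch of the preload step traced above
# (autotest_common.sh@1341-1356); paths are the ones from this run.
fio_bin=/usr/src/fio/fio
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme

asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
    # field 3 is the resolved path of the sanitizer runtime.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done

# fio dlopen()s the plugin only after it starts, so the sanitizer
# runtime must already be mapped -- hence the two-entry LD_PRELOAD.
LD_PRELOAD="$asan_lib $plugin" "$fio_bin" \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
```

The dotted traddr in the filename (0000.00.11.0 rather than 0000:00:11.0) is deliberate: fio splits filenames on ':', so the plugin accepts '.' in the PCI address and converts it back internally.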
22:58:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:17:25.799 22:58:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:17:25.799 22:58:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:17:25.799 22:58:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:17:25.799 22:58:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:17:25.799 22:58:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:17:26.058 22:58:53 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:17:26.058 22:58:53 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:17:26.058 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:26.059 22:58:53 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:17:26.317 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:26.317 fio-3.35 00:17:26.317 Starting 1 thread 00:17:30.510 00:17:30.510 test: (groupid=0, jobs=1): err= 0: pid=65981: Mon Dec 9 22:58:56 2024 00:17:30.510 read: IOPS=20.4k, BW=79.7MiB/s (83.6MB/s)(159MiB/2001msec) 00:17:30.510 slat (nsec): min=3796, max=79518, avg=4924.04, stdev=1500.29 00:17:30.510 clat (usec): min=360, max=10985, avg=3122.66, stdev=596.73 00:17:30.510 lat (usec): min=364, max=11049, avg=3127.59, stdev=597.48 00:17:30.510 clat percentiles (usec): 00:17:30.510 | 1.00th=[ 2507], 5.00th=[ 2737], 10.00th=[ 2802], 20.00th=[ 2835], 00:17:30.510 | 30.00th=[ 2900], 
40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:17:30.510 | 70.00th=[ 3097], 80.00th=[ 3163], 90.00th=[ 3425], 95.00th=[ 4228], 00:17:30.510 | 99.00th=[ 5800], 99.50th=[ 7177], 99.90th=[ 8586], 99.95th=[ 9110], 00:17:30.510 | 99.99th=[10814] 00:17:30.510 bw ( KiB/s): min=74408, max=86464, per=99.60%, avg=81280.00, stdev=6202.72, samples=3 00:17:30.510 iops : min=18602, max=21616, avg=20320.00, stdev=1550.68, samples=3 00:17:30.510 write: IOPS=20.4k, BW=79.5MiB/s (83.4MB/s)(159MiB/2001msec); 0 zone resets 00:17:30.510 slat (nsec): min=3773, max=38924, avg=5177.61, stdev=1427.75 00:17:30.510 clat (usec): min=330, max=10913, avg=3130.55, stdev=616.71 00:17:30.510 lat (usec): min=335, max=10924, avg=3135.73, stdev=617.45 00:17:30.510 clat percentiles (usec): 00:17:30.510 | 1.00th=[ 2540], 5.00th=[ 2737], 10.00th=[ 2802], 20.00th=[ 2868], 00:17:30.510 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3064], 00:17:30.510 | 70.00th=[ 3130], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 4228], 00:17:30.510 | 99.00th=[ 6194], 99.50th=[ 7373], 99.90th=[ 8586], 99.95th=[ 9503], 00:17:30.510 | 99.99th=[10683] 00:17:30.510 bw ( KiB/s): min=74816, max=86456, per=100.00%, avg=81410.67, stdev=5972.66, samples=3 00:17:30.510 iops : min=18704, max=21614, avg=20352.67, stdev=1493.17, samples=3 00:17:30.510 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:17:30.510 lat (msec) : 2=0.25%, 4=92.42%, 10=7.26%, 20=0.03% 00:17:30.510 cpu : usr=99.20%, sys=0.10%, ctx=3, majf=0, minf=607 00:17:30.510 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:30.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:30.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:30.510 issued rwts: total=40823,40723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:30.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:30.510 00:17:30.510 Run status group 0 (all jobs): 00:17:30.510 READ: bw=79.7MiB/s (83.6MB/s), 79.7MiB/s-79.7MiB/s (83.6MB/s-83.6MB/s), io=159MiB (167MB), run=2001-2001msec 00:17:30.510 WRITE: bw=79.5MiB/s (83.4MB/s), 79.5MiB/s-79.5MiB/s (83.4MB/s-83.4MB/s), io=159MiB (167MB), run=2001-2001msec 00:17:30.510 ----------------------------------------------------- 00:17:30.510 Suppressions used: 00:17:30.510 count bytes template 00:17:30.510 1 32 /usr/src/fio/parse.c 00:17:30.510 1 8 libtcmalloc_minimal.so 00:17:30.510 ----------------------------------------------------- 00:17:30.510 00:17:30.510 22:58:57 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:17:30.510 22:58:57 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:17:30.510 22:58:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:17:30.510 22:58:57 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:17:30.510 22:58:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:17:30.510 22:58:57 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:17:30.769 22:58:57 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:17:30.769 22:58:57 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:30.769 22:58:57 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:17:30.769 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:30.769 fio-3.35 00:17:30.769 Starting 1 thread 00:17:36.038 00:17:36.038 test: (groupid=0, jobs=1): err= 0: pid=66049: Mon Dec 9 22:59:02 2024 00:17:36.038 read: IOPS=21.5k, BW=84.1MiB/s (88.2MB/s)(168MiB/2001msec) 00:17:36.038 slat (nsec): min=3917, max=78213, avg=4797.65, stdev=1305.79 00:17:36.038 clat (usec): min=195, max=10307, avg=2967.02, stdev=564.26 00:17:36.038 lat (usec): min=200, max=10385, avg=2971.82, stdev=564.93 00:17:36.038 clat percentiles (usec): 00:17:36.038 | 1.00th=[ 1876], 5.00th=[ 2606], 10.00th=[ 2704], 20.00th=[ 2769], 00:17:36.038 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:17:36.038 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3130], 95.00th=[ 3458], 00:17:36.038 | 99.00th=[ 5604], 99.50th=[ 7504], 99.90th=[ 8291], 99.95th=[ 8586], 00:17:36.038 | 99.99th=[10028] 00:17:36.038 bw ( KiB/s): min=84128, max=85064, per=98.11%, avg=84477.33, stdev=511.15, samples=3 00:17:36.038 iops : min=21032, max=21266, avg=21119.33, stdev=127.79, samples=3 00:17:36.038 write: IOPS=21.4k, BW=83.5MiB/s (87.5MB/s)(167MiB/2001msec); 0 zone resets 00:17:36.038 slat (nsec): min=4054, max=52412, avg=5064.86, stdev=1302.08 00:17:36.038 clat (usec): min=227, max=10079, avg=2972.77, stdev=569.08 00:17:36.038 lat (usec): min=232, max=10093, avg=2977.84, stdev=569.73 00:17:36.038 clat percentiles (usec): 00:17:36.038 | 1.00th=[ 1909], 5.00th=[ 2606], 10.00th=[ 2737], 20.00th=[ 2802], 00:17:36.038 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:17:36.038 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3163], 95.00th=[ 3458], 
00:17:36.038 | 99.00th=[ 5604], 99.50th=[ 7701], 99.90th=[ 8356], 99.95th=[ 8586], 00:17:36.038 | 99.99th=[ 9765] 00:17:36.038 bw ( KiB/s): min=84024, max=85456, per=98.99%, avg=84605.33, stdev=753.03, samples=3 00:17:36.038 iops : min=21006, max=21364, avg=21151.33, stdev=188.26, samples=3 00:17:36.038 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:17:36.038 lat (msec) : 2=1.17%, 4=96.10%, 10=2.66%, 20=0.01% 00:17:36.038 cpu : usr=99.35%, sys=0.00%, ctx=5, majf=0, minf=605 00:17:36.038 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:36.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:36.038 issued rwts: total=43075,42755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.038 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:36.038 00:17:36.038 Run status group 0 (all jobs): 00:17:36.038 READ: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=168MiB (176MB), run=2001-2001msec 00:17:36.038 WRITE: bw=83.5MiB/s (87.5MB/s), 83.5MiB/s-83.5MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec 00:17:36.038 ----------------------------------------------------- 00:17:36.038 Suppressions used: 00:17:36.038 count bytes template 00:17:36.038 1 32 /usr/src/fio/parse.c 00:17:36.038 1 8 libtcmalloc_minimal.so 00:17:36.038 ----------------------------------------------------- 00:17:36.038 00:17:36.038 22:59:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:17:36.038 22:59:03 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:17:36.038 00:17:36.038 real 0m19.621s 00:17:36.038 user 0m14.977s 00:17:36.038 sys 0m4.557s 00:17:36.038 22:59:03 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.038 22:59:03 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:17:36.038 ************************************ 00:17:36.038 END TEST nvme_fio 00:17:36.038 ************************************ 00:17:36.038 00:17:36.038 real 1m36.058s 00:17:36.038 user 3m45.598s 00:17:36.038 sys 0m24.905s 00:17:36.038 22:59:03 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.038 22:59:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:36.038 ************************************ 00:17:36.038 END TEST nvme 00:17:36.038 ************************************ 00:17:36.038 22:59:03 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:17:36.038 22:59:03 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:17:36.038 22:59:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:36.038 22:59:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.038 22:59:03 -- common/autotest_common.sh@10 -- # set +x 00:17:36.038 ************************************ 00:17:36.038 START TEST nvme_scc 00:17:36.038 ************************************ 00:17:36.038 22:59:03 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:17:36.038 * Looking for test storage... 
00:17:36.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:36.038 22:59:03 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:36.038 22:59:03 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:36.038 22:59:03 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:36.297 22:59:03 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@345 -- # : 1 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@368 -- # return 0 00:17:36.297 22:59:03 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.297 22:59:03 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:36.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.297 --rc genhtml_branch_coverage=1 00:17:36.297 --rc genhtml_function_coverage=1 00:17:36.297 --rc genhtml_legend=1 00:17:36.297 --rc geninfo_all_blocks=1 00:17:36.297 --rc geninfo_unexecuted_blocks=1 00:17:36.297 00:17:36.297 ' 00:17:36.297 22:59:03 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:36.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.297 --rc genhtml_branch_coverage=1 00:17:36.297 --rc genhtml_function_coverage=1 00:17:36.297 --rc genhtml_legend=1 00:17:36.297 --rc geninfo_all_blocks=1 00:17:36.297 --rc geninfo_unexecuted_blocks=1 00:17:36.297 00:17:36.297 ' 00:17:36.297 22:59:03 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:36.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.297 --rc genhtml_branch_coverage=1 00:17:36.297 --rc genhtml_function_coverage=1 00:17:36.297 --rc genhtml_legend=1 00:17:36.297 --rc geninfo_all_blocks=1 00:17:36.297 --rc geninfo_unexecuted_blocks=1 00:17:36.297 00:17:36.297 ' 00:17:36.297 22:59:03 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:36.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.297 --rc genhtml_branch_coverage=1 00:17:36.297 --rc genhtml_function_coverage=1 00:17:36.297 --rc genhtml_legend=1 00:17:36.297 --rc geninfo_all_blocks=1 00:17:36.297 --rc geninfo_unexecuted_blocks=1 00:17:36.297 00:17:36.297 ' 00:17:36.297 22:59:03 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:17:36.297 22:59:03 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:17:36.297 22:59:03 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:17:36.297 22:59:03 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:36.297 22:59:03 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.297 22:59:03 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.297 22:59:03 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.298 22:59:03 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.298 22:59:03 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.298 22:59:03 nvme_scc -- paths/export.sh@5 -- # export PATH 00:17:36.298 22:59:03 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
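The lcov probe at the top of nvme_scc (common/autotest_common.sh@1710-1711 feeding scripts/common.sh@373 above) is the suite's generic version gate: `lt 1.15 2` expands to `cmp_versions 1.15 '<' 2`, which splits both version strings on '.', '-' and ':' and compares them numerically field by field before deciding which coverage flags to export. A compact re-implementation of the same idea, not the verbatim SPDK helper, and assuming purely numeric version fields:

```bash
# Sketch of the field-wise comparison traced above
# (scripts/common.sh@333-368): split on '.', '-' and ':', compare
# numerically, and treat missing fields as 0. Numeric fields only.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v max
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]
}

lt 1.15 2 && echo "old lcov: keep the branch/function coverage flags"
```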
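Everything from scan_nvme_ctrls onward, as the trace that follows shows at full verbosity, is functions.sh caching controller state: for each /sys/class/nvme/nvme* device, nvme_get runs nvme-cli's id-ctrl (and id-ns per namespace), reads every "reg : val" line with IFS=:, and evals it into a global associative array (nvme0, ng0n1, ...) so later tests can consult fields such as mdts, oncs or subnqn without re-issuing Identify. A reduced sketch of that loop, with the nvme-cli path taken from this run:

```bash
#!/usr/bin/env bash
# Reduced sketch of the nvme_get loop whose xtrace fills the lines
# below (functions.sh@17-23): every "reg : val" line of id-ctrl output
# becomes one entry in a global associative array named per controller.
declare -A nvme0=()

nvme_get() {
    local ref=$1 reg val
    while IFS=: read -r reg val; do
        # Strip whitespace from the key ("ps    0" -> "ps0", as in the
        # trace) and skip banner lines that have no value part.
        reg=${reg//[[:space:]]/} val=${val# }
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\$val"
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$2")  # path from this run
}

nvme_get nvme0 /dev/nvme0
echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} subnqn=${nvme0[subnqn]}"
```

The real helper is more elaborate (it uses shift plus a -gA declaration so one function can fill nvme0, nvme1, ng0n1 and so on), but the parse-trim-eval shape is exactly what the remainder of this trace is echoing register by register.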
00:17:36.298 22:59:03 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:17:36.298 22:59:03 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:17:36.298 22:59:03 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:17:36.298 22:59:03 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:17:36.298 22:59:03 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:17:36.298 22:59:03 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:17:36.298 22:59:03 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:17:36.298 22:59:03 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:17:36.298 22:59:03 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:17:36.298 22:59:03 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.298 22:59:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:17:36.298 22:59:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:17:36.298 22:59:03 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:17:36.298 22:59:03 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:36.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:37.123 Waiting for block devices as requested 00:17:37.123 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:37.123 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:37.382 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:37.382 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:42.660 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:42.660 22:59:09 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:17:42.660 22:59:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:17:42.660 22:59:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:17:42.660 22:59:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:42.660 22:59:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:17:42.660 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:17:42.661 22:59:09 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.661 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.662 22:59:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:17:42.662 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:42.663 22:59:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.663 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:17:42.664 
22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
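The xtrace above is nvme/functions.sh's nvme_get loop at work: it runs nvme-cli's id-ctrl/id-ns against each device, splits every "field : value" line with IFS=: and read -r reg val, skips empty values, and evals the pair into a named global associative array (nvme0, ng0n1, nvme0n1, ...). A minimal sketch of that loop, assuming the same "field : value" output shape — parse_id_output is an illustrative name, not the SPDK function:

    parse_id_output() {
        local ref=$1 reg val          # $ref names the target array, e.g. nvme0
        local -gA "$ref=()"           # global associative array, as in functions.sh@20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}  # strip padding around the field name ("ps    0" -> ps0)
            [[ -n $val ]] || continue # functions.sh@22: skip lines with no value
            eval "${ref}[${reg}]=\"${val# }\""   # functions.sh@23: store reg=val
        done
    }
    # usage (process substitution keeps the array in the current shell):
    # parse_id_output nvme0 < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)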
00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:17:42.664 22:59:09 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.664 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:17:42.665 22:59:09 nvme_scc 
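Each id-ns dump above ends with eight LBA formats (lbaf0..lbaf7); flbas=0x4 points at lbaf4, "ms:0 lbads:12 rp:0 (in use)", i.e. 4096-byte data blocks with no per-block metadata (lbads is log2 of the block size in the NVMe Identify Namespace layout). A quick check of that arithmetic against the captured fields — variable names here are illustrative:

    flbas=0x4
    fmt=$(( flbas & 0x0f ))                 # low nibble of FLBAS selects the format index
    lbaf='ms:0 lbads:12 rp:0 (in use)'      # ng0n1[lbaf4] / nvme0n1[lbaf4] above
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}
    echo "format $fmt: $(( 1 << lbads ))-byte blocks"   # -> format 4: 4096-byte blocks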
-- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:17:42.665 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:17:42.666 22:59:09 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:42.666 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:17:42.667 22:59:09 nvme_scc -- scripts/common.sh@18 -- # local i 00:17:42.667 22:59:09 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:17:42.667 22:59:09 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:42.667 22:59:09 nvme_scc -- scripts/common.sh@27 -- # return 0 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:17:42.667 22:59:09 
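With nvme0 fully parsed, functions.sh@60-63 files it into the global maps — ctrls[nvme0]=nvme0, nvmes[nvme0]=nvme0_ns, bdfs[nvme0]=0000:00:11.0, ordered_ctrls[0]=nvme0 — and the @47 loop then picks up /sys/class/nvme/nvme1 once pci_can_use accepts 0000:00:10.0. A sketch of how later test code can walk those maps (the array contents are copied from the trace; the loop itself is illustrative):

    declare -A bdfs=( [nvme0]=0000:00:11.0 )
    declare -A nvmes=( [nvme0]=nvme0_ns )
    declare -A nvme0_ns=( [1]=nvme0n1 )     # nsid -> namespace device, per functions.sh@58
    for ctrl in "${!bdfs[@]}"; do
        declare -n ns_map=${nvmes[$ctrl]}   # nameref, like functions.sh@53
        echo "$ctrl @ ${bdfs[$ctrl]}: namespaces ${ns_map[*]}"
        unset -n ns_map
    done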
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.667 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.668 
22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:17:42.668 
22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:17:42.668 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.669 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:17:42.670 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.671 22:59:09 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:42.671 22:59:09 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:42.671 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
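[annotation] After the controller dump, the trace walks the controller's namespaces via an extglob over sysfs (functions.sh@54-57), picking up both the generic char node ng1n1 and the block node nvme1n1 and running `nvme id-ns` against each. A standalone sketch of that glob, assuming the same /sys/class/nvme layout seen in the trace:

    # Sketch: the @("ng1"|"nvme1n")* pattern from the trace, spelled out.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    inst=${ctrl##*nvme}                  # "1"
    name=${ctrl##*/}                     # "nvme1"
    for ns in "$ctrl/"@("ng${inst}"|"${name}n")*; do
        ns_dev=${ns##*/}                 # ng1n1, then nvme1n1
        echo "nvme id-ns /dev/${ns_dev}" # what nvme_get is invoked with above
    done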
00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:17:42.672 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:17:42.673 22:59:09 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:17:42.673 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:17:42.674 22:59:09 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 
22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
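[annotation] The id-ns fields captured for ng1n1 are enough to recover the namespace geometry by hand: the low nibble of flbas selects the active LBA format, and lbaf7 above reads "ms:64 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte data blocks with 64 bytes of metadata per block. A small worked decode, assuming the array contents shown in the trace:

    # Sketch: decode block size and raw capacity from the ng1n1 dump above.
    declare -A ng1n1=(
        [nsze]=0x17a17a
        [flbas]=0x7
        [lbaf7]="ms:64 lbads:12 rp:0 (in use)"
    )
    fmt=$(( ${ng1n1[flbas]} & 0xf ))      # low nibble -> format index 7
    lbads=${ng1n1[lbaf$fmt]#*lbads:}      # "12 rp:0 (in use)"
    lbads=${lbads%% *}                    # "12"
    bs=$(( 1 << lbads ))                  # 4096-byte blocks
    echo "lbaf${fmt}: ${bs} B blocks, $(( ${ng1n1[nsze]} * bs )) B raw"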
00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:17:42.942 22:59:09 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:17:42.942 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.942 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.942 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.942 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:17:42.942 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:17:42.942 
22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:17:42.943 22:59:10 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:17:42.943 22:59:10 nvme_scc -- scripts/common.sh@18 -- # local i 00:17:42.943 22:59:10 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:17:42.943 22:59:10 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:42.943 22:59:10 nvme_scc -- scripts/common.sh@27 -- # return 0 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.943 22:59:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
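(Editor's note on the trace above: this is nvme/functions.sh's nvme_get populating a bash associative array, one register per iteration, from the "field : value" lines that nvme-cli prints. A minimal standalone sketch of the same parsing pattern follows; the sample here-doc stands in for `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2` and the variable names are illustrative, not the SPDK code itself.)

#!/usr/bin/env bash
# Sketch of the parse visible in this trace: split each nvme-cli
# "field : value" line on ':' and keep the pair in an associative
# array keyed by register name.
declare -A ctrl=()
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}      # drop the column padding around the name
    [[ -n $reg ]] || continue     # skip banner/blank lines
    ctrl[$reg]=${val# }           # keep the value minus one leading space
done <<'EOF'
vid       : 0x1b36
ssvid     : 0x1af4
oncs      : 0x15d
EOF
printf '%s=%s\n' vid "${ctrl[vid]}" oncs "${ctrl[oncs]}"
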
00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:17:42.944 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:17:42.945 22:59:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
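(Editor's note: the repeated `eval 'nvme2[wctemp]="343"'` records above show the idiom for writing into an array whose *name* is only known at run time, the `$ref` argument of nvme_get. A hedged sketch of that indirection, assuming bash 4.2+ for `local -g`; it expands the value at eval time rather than baking it into the string as the traced code does, but the technique is the same.)

#!/usr/bin/env bash
# Sketch of the dynamic-assignment idiom from the trace: the target
# array name arrives as a parameter, so the assignment is built as a
# string and eval'd.
nvme_get_demo() {
    local ref=$1 reg val
    local -gA "$ref=()"                  # create the global assoc array
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}
        [[ -n $reg ]] || continue
        eval "${ref}[$reg]=\"\${val# }\""  # e.g. demo[wctemp]="343"
    done
}
printf 'wctemp : 343\ncctemp : 373\n' | { nvme_get_demo demo; declare -p demo; }
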
00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:17:42.945 22:59:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:17:42.945 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:17:42.946 
22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:42.946 
22:59:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.946 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
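(Editor's note: after id-ctrl, the trace moves to the per-namespace loop, `for ns in "$ctrl/"@(...)`, matching the controller's sysfs children such as /sys/class/nvme/nvme2/ng2n1 and repeating the same parse with `nvme id-ns` against the char device. A simplified sketch of that enumeration, using a plain glob instead of the script's extglob pattern; paths assume hardware like the QEMU controllers in this run, so it only does real work on a machine with NVMe devices.)

#!/usr/bin/env bash
# Sketch: enumerate one controller's namespaces via sysfs and run
# id-ns on each char device, mirroring the loop in this trace.
shopt -s nullglob
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl"/ng*n*; do
    dev=/dev/${ns##*/}                 # e.g. /dev/ng2n1
    [[ -e $dev ]] || continue
    echo "== $dev =="
    nvme id-ns "$dev" | head -n 5      # nsze, ncap, nuse, nsfeat, ...
done
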
00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.947 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:17:42.948 22:59:10 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 
22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:17:42.948 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:42.949 22:59:10 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:17:42.949 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.950 22:59:10 
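The stanza above is nvme/functions.sh walking one namespace node at a time: nvme_get declares a global associative array named after the node (functions.sh@20, here ng2n2), runs nvme-cli's id-ns against the device (functions.sh@16), and splits every output line on ':' into a reg/val pair, eval'ing an assignment whenever val is non-empty (functions.sh@21-23). A minimal sketch of that pattern, assuming nvme-cli's plain-text 'field : value' output; the helper signature and the trimming are simplified relative to the real script, which also takes the nvme subcommand as an argument:

nvme_get() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                      # global assoc array named by $ref, as at functions.sh@20
    while IFS=: read -r reg val; do          # val keeps embedded ':' so lbaf descriptors survive intact
        reg=${reg//[[:space:]]/}             # 'nsze      ' -> 'nsze'
        val="${val#"${val%%[![:space:]]*}"}" # trim leading whitespace
        [[ -n $reg && -n $val ]] && eval "${ref}[\$reg]=\$val"
    done < <(nvme id-ns "$dev")
}

nvme_get ng2n2 /dev/ng2n2    # afterwards: echo "${ng2n2[nsze]}" -> 0x100000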
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.950 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- 
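Between stanzas, functions.sh@54 advances an extglob loop that matches both node flavours under the controller's sysfs directory, and functions.sh@58 records each parsed node in _ctrl_ns keyed by namespace index. A condensed sketch of that walk, with paths as in this run (the real loop also guards each match with [[ -e ]] at @55 and calls nvme_get on it):

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
declare -A _ctrl_ns
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    # expands to ng2n1..ng2n3 and nvme2n1..nvme2n3; ${ns##*n} keeps only the
    # trailing NSID, so the nvme2nX entries later overwrite the ng2nX ones
    [[ -e $ns ]] && _ctrl_ns[${ns##*n}]=${ns##*/}
done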
nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.951 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.952 22:59:10 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:17:42.952 22:59:10 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:42.952 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
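From here the same Identify Namespace stanzas replay on the block nodes: ng2nX are the NVMe generic character devices and nvme2nX the block devices for the same namespaces, so id-ns reports identical fields for each pair, which is why every value above repeats. A quick spot-check on a live system, with device paths as in this run (needs nvme-cli and root):

diff <(nvme id-ns /dev/ng2n1) <(nvme id-ns /dev/nvme2n1) && echo identical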
]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:17:42.953 22:59:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:17:42.953 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:42.954 22:59:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:17:43.216 
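Reading the captured fields back gives the namespace geometry: flbas=0x4 selects LBA format 4, whose descriptor 'ms:0 lbads:12 rp:0 (in use)' means no separate metadata and 2^12-byte data blocks, while nsze=0x100000 counts blocks. An illustrative decode, not part of functions.sh:

flbas=0x4 nsze=0x100000 lbads=12
fmt=$(( flbas & 0xf ))    # FLBAS bits 3:0 pick the in-use format index
printf 'format %d, block %d B, size %d B\n' \
    "$fmt" "$(( 1 << lbads ))" "$(( nsze * (1 << lbads) ))"
# -> format 4, block 4096 B, size 4294967296 B (4 GiB per namespace)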
22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.216 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:17:43.217 22:59:10 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.217 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:43.218 22:59:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.218 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:17:43.219 22:59:10 nvme_scc -- 
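The xtrace above is nvme/functions.sh walking /sys/class/nvme and caching every `nvme id-ns` field for nvme2n3 in a global associative array: each line of nvme-cli output is split on `:` into a register name and a value, then stored via eval. A minimal sketch of that pattern, reconstructed from the trace (the in-tree nvme_get also shifts off its ref argument and handles more edge cases; its exact body is not shown in this log):

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern traced above: parse "field : value"
    # lines from nvme-cli into a global associative array named by $1.
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # e.g. declares global nvme2n3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip headers and blank lines
            reg=${reg//[[:space:]]/}         # trim padding around the name
            val=${val# }                     # drop the pad space nvme-cli prints
            eval "${ref}[\$reg]=\"\$val\""   # nvme2n3[nsze]="0x100000", etc.
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # usage: nvme_get nvme2n3 id-ns /dev/nvme2n3; echo "${nvme2n3[nsze]}"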
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:17:43.219 22:59:10 nvme_scc -- scripts/common.sh@18 -- # local i 00:17:43.219 22:59:10 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:17:43.219 22:59:10 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:43.219 22:59:10 nvme_scc -- scripts/common.sh@27 -- # return 0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:17:43.219 22:59:10 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:17:43.219 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:17:43.220 22:59:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 
22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:17:43.220 22:59:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 
22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.220 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:17:43.221 
22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:43.221 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:17:43.222 22:59:10 nvme_scc -- 
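With nvme3's identify dump finished, the scan registers the controller in three parallel maps keyed by device name: ctrls (its register array), nvmes (the name of its per-namespace array, nvme3_ns), and bdfs (its PCI address, here 0000:00:13.0). Reading a register back out later goes through a bash nameref, exactly as the get_nvme_ctrl_feature calls traced below do. A hedged reconstruction from that trace (not the verbatim functions.sh source):

    # Reconstructed from the trace below: fetch one identify register
    # (default oncs) out of the array nvme_get populated for a controller.
    get_nvme_ctrl_feature() {
        local ctrl=$1 reg=${2:-oncs}
        [[ -n $ctrl ]] || return 1
        local -n _ctrl=$ctrl                 # nameref onto e.g. the nvme3 array
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }
    # e.g. get_nvme_ctrl_feature nvme3 oncs   ->  0x15d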
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:17:43.222 22:59:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
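The scc selection traced here gates on ONCS (Optional NVM Command Support) bit 8, which the NVMe base spec assigns to the Copy command: every QEMU controller in this run reports oncs=0x15d, so 0x15d & (1 << 8) is non-zero, all four controllers pass ctrl_has_scc, and the first one in order (nvme1 on 0000:00:10.0) is what nvme_scc.sh ends up testing, as the trace that follows shows. The check itself in isolation (helper name illustrative):

    # ONCS bit 8 = Copy (simple copy) command supported; 0x15d has 0x100 set.
    has_simple_copy() {
        local oncs=$1
        (( oncs & 1 << 8 ))
    }
    has_simple_copy 0x15d && echo "SCC supported"    # prints: SCC supported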
00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:17:43.222 22:59:10 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:17:43.222 22:59:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:17:43.222 22:59:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:17:43.222 22:59:10 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:43.789 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:44.722 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:44.722 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:44.722 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:44.722 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:44.722 22:59:11 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:17:44.722 22:59:11 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:44.722 22:59:11 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.722 22:59:11 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:17:44.722 ************************************ 00:17:44.722 START TEST nvme_simple_copy 00:17:44.722 ************************************ 00:17:44.722 22:59:11 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:17:44.979 Initializing NVMe Controllers 00:17:44.979 Attaching to 0000:00:10.0 00:17:44.979 Controller supports SCC. Attached to 0000:00:10.0 00:17:44.979 Namespace ID: 1 size: 6GB 00:17:44.979 Initialization complete. 
00:17:44.979 00:17:44.979 Controller QEMU NVMe Ctrl (12340 ) 00:17:44.979 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:17:44.979 Namespace Block Size:4096 00:17:44.979 Writing LBAs 0 to 63 with Random Data 00:17:44.979 Copied LBAs from 0 - 63 to the Destination LBA 256 00:17:44.979 LBAs matching Written Data: 64 00:17:44.979 00:17:44.979 real 0m0.334s 00:17:44.979 user 0m0.117s 00:17:44.979 sys 0m0.116s 00:17:44.979 22:59:12 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.979 ************************************ 00:17:44.979 END TEST nvme_simple_copy 00:17:44.979 ************************************ 00:17:44.979 22:59:12 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 00:17:45.237 real 0m9.133s 00:17:45.237 user 0m1.620s 00:17:45.237 sys 0m2.387s 00:17:45.237 22:59:12 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.237 22:59:12 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 ************************************ 00:17:45.237 END TEST nvme_scc 00:17:45.237 ************************************ 00:17:45.237 22:59:12 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:17:45.237 22:59:12 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:17:45.237 22:59:12 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:17:45.237 22:59:12 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:17:45.237 22:59:12 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:17:45.237 22:59:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:45.237 22:59:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.237 22:59:12 -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 ************************************ 00:17:45.237 START TEST nvme_fdp 00:17:45.237 ************************************ 00:17:45.237 22:59:12 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:17:45.237 * Looking for test storage... 00:17:45.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:45.237 22:59:12 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:45.238 22:59:12 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:17:45.238 22:59:12 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:45.238 22:59:12 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.238 22:59:12 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:17:45.238 22:59:12 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.238 22:59:12 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:45.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.238 --rc genhtml_branch_coverage=1 00:17:45.238 --rc genhtml_function_coverage=1 00:17:45.238 --rc genhtml_legend=1 00:17:45.238 --rc geninfo_all_blocks=1 00:17:45.238 --rc geninfo_unexecuted_blocks=1 00:17:45.238 00:17:45.238 ' 00:17:45.581 22:59:12 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:45.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.581 --rc genhtml_branch_coverage=1 00:17:45.581 --rc genhtml_function_coverage=1 00:17:45.581 --rc genhtml_legend=1 00:17:45.581 --rc geninfo_all_blocks=1 00:17:45.581 --rc geninfo_unexecuted_blocks=1 00:17:45.581 00:17:45.581 ' 00:17:45.581 22:59:12 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:45.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.581 --rc genhtml_branch_coverage=1 00:17:45.581 --rc genhtml_function_coverage=1 00:17:45.581 --rc genhtml_legend=1 00:17:45.581 --rc geninfo_all_blocks=1 00:17:45.581 --rc geninfo_unexecuted_blocks=1 00:17:45.581 00:17:45.581 ' 00:17:45.581 22:59:12 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:45.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.581 --rc genhtml_branch_coverage=1 00:17:45.581 --rc genhtml_function_coverage=1 00:17:45.581 --rc genhtml_legend=1 00:17:45.581 --rc geninfo_all_blocks=1 00:17:45.581 --rc geninfo_unexecuted_blocks=1 00:17:45.581 00:17:45.581 ' 00:17:45.581 22:59:12 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:17:45.581 22:59:12 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:17:45.581 22:59:12 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:17:45.581 22:59:12 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:45.581 22:59:12 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.581 22:59:12 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.581 22:59:12 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.582 22:59:12 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.582 22:59:12 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.582 22:59:12 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.582 22:59:12 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.582 22:59:12 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.582 22:59:12 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:17:45.582 22:59:12 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.582 22:59:12 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:17:45.582 22:59:12 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:17:45.582 22:59:12 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:17:45.582 22:59:12 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:17:45.582 22:59:12 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:17:45.582 22:59:12 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:17:45.582 22:59:12 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:17:45.582 22:59:12 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:17:45.582 22:59:12 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:17:45.582 22:59:12 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.582 22:59:12 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:45.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:45.855 Waiting for block devices as requested 00:17:46.114 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:46.114 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:46.114 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:46.371 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:51.641 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:51.641 22:59:18 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:17:51.641 22:59:18 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:17:51.641 22:59:18 nvme_fdp -- scripts/common.sh@18 -- # local i 00:17:51.641 22:59:18 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:17:51.641 22:59:18 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:51.641 22:59:18 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.641 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:17:51.642 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:17:51.642 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:17:51.643 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:17:51.643 22:59:18 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:51.643 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 
22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:17:51.644 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:17:51.644 22:59:18 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.644 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:17:51.645 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
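[The long run of eval records here is nvme_get flattening `nvme id-ns` output into a global associative array (ng0n1): each output line is split on ':' via IFS, the field name becomes the key, and the rest of the line becomes the value. A condensed sketch of the same loop, assuming the tool prints one "field : value" pair per line as in this trace; 'ns' stands in for the dynamically named array that functions.sh creates via eval:

    declare -A ns
    while IFS=: read -r reg val; do
        reg=${reg// /}                      # field names arrive padded with spaces
        [[ -n $val ]] && ns[$reg]=${val# }  # keep the value, minus leading space
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1)
    echo "nsze=${ns[nsze]} flbas=${ns[flbas]}"
]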
00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:17:51.645 22:59:18 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:51.645 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
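The lbaf0..lbaf7 entries captured above describe the namespace's eight LBA formats, with the active one tagged "(in use)" (here lbaf4, ms:0 lbads:12 rp:0); per the NVMe spec, FLBAS bits 3:0 carry that index (the same value shows up as nvme0n1[flbas]=0x4 just below), and lbads is the base-2 log of the logical block size. A short decoding sketch; variable names are illustrative:

    flbas=0x4                                  # format index lives in bits 3:0
    fmt=$(( flbas & 0xf ))                     # -> 4
    lbaf='ms:0 lbads:12 rp:0 (in use)'         # the lbaf4 string captured above
    if [[ $lbaf =~ lbads:([0-9]+) ]]; then
        lbads=${BASH_REMATCH[1]}
        echo "lbaf$fmt: $(( 1 << lbads ))-byte logical blocks"   # -> 4096
    fi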
00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:17:51.646 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.646 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:51.647 22:59:18 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:17:51.647 22:59:18 nvme_fdp -- scripts/common.sh@18 -- # local i 00:17:51.647 22:59:18 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:17:51.647 22:59:18 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:51.647 22:59:18 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:17:51.647 22:59:18 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:17:51.648 22:59:18 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
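Capability words captured above, such as oacs=0x12a and frmw=0x3, are plain bitmasks, so later test code can gate on single bits with shell arithmetic. As one hedged example (bit position per the NVMe base spec; the variable name is illustrative), OACS bit 5 advertises Directive Send/Receive support, which FDP placement hints depend on:

    oacs=0x12a                        # nvme1[oacs] from the trace above
    if (( oacs & (1 << 5) )); then    # bit 5: Directives supported
        echo "nvme1 supports Directive Send/Receive"
    fi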
00:17:51.648 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
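The wctemp=343 and cctemp=373 values captured a few lines above are temperature thresholds; the NVMe spec reports these in kelvins, so a consumer converts before printing. A one-liner sketch under that assumption:

    wctemp=343 cctemp=373                              # nvme1 values from the trace
    echo "warning threshold:  $(( wctemp - 273 )) C"   # -> 70
    echo "critical threshold: $(( cctemp - 273 )) C"   # -> 100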
00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.649 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:51.650 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:17:51.651 22:59:18 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
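[annotation] The repeating IFS=: / read -r reg val / [[ -n ... ]] / eval pattern above is the nvme_get loop from nvme/functions.sh@17-23: every line of nvme-cli output is split on the first ':' into a register name and value, empty values are skipped, and non-empty ones are eval'd into a global associative array named after the device (nvme1, ng1n1, ...). A condensed standalone re-creation of that pattern, assuming nothing beyond what the trace shows (the helper name nvme_get_sketch, the key normalization, and the whitespace trim are my additions, not the repo's exact code):

    # Sketch: parse "key : value" lines from nvme-cli output into a
    # global associative array, mirroring the traced IFS/read/eval steps.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                   # e.g. declare -gA nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue         # skip blank/header values
            reg=${reg//[^a-zA-Z0-9_]/}        # normalize key (assumption)
            val=${val#"${val%%[![:space:]]*}"}  # trim leading spaces
            eval "${ref}[${reg}]=\"\$val\""   # nvme1[nsze]="0x17a17a" ...
        done < <("$@")                        # e.g. nvme id-ns /dev/ng1n1
    }

Usage would look like nvme_get_sketch nvme1 nvme id-ctrl /dev/nvme1; echo "${nvme1[sn]}". The eval keeps multi-word values (power states, lbaf lines) intact, which is why those appear quoted in the trace.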
00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:17:51.651 22:59:18 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.651 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
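[annotation] Earlier in this id-ns dump flbas reads 0x7 with nlbaf=7. Per the NVMe Identify Namespace layout, bits 3:0 of FLBAS select the in-use LBA format and bit 4 says whether metadata is transferred inline, which is why the lbaf7 entry just below carries the "(in use)" tag. A two-line decode, variable names of my own choosing:

    flbas=0x7
    printf 'in use: lbaf%d, extended metadata: %d\n' \
        $(( flbas & 0xf )) $(( (flbas >> 4) & 0x1 ))   # -> lbaf7, 0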
00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:51.652 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:51.652 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:17:51.652 22:59:18 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.652 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:17:51.653 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
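[annotation] Both namespace dumps report nsze/ncap/nuse = 0x17a17a, and the in-use format lbaf7 carries lbads:12, i.e. 2^12 = 4096-byte logical blocks, so this QEMU namespace works out to roughly 5.9 GiB. A quick shell check, values copied straight from the trace:

    nsze=0x17a17a                     # blocks, from id-ns above
    lbads=12                          # lbaf7 "lbads:12" -> 4096-byte blocks
    echo $(( nsze * (1 << lbads) ))   # 6343335936 bytes, ~5.9 GiB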
00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:17:51.653 22:59:18 nvme_fdp -- scripts/common.sh@18 -- # local i 00:17:51.653 22:59:18 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:17:51.653 22:59:18 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:51.653 22:59:18 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.653 22:59:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
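[annotation] Between the nvme1n1 block and the start of nvme2 above, the trace shows the bookkeeping step (functions.sh@58-63): the extglob at @54 had enumerated both the generic character node (ng1n1) and the block node (nvme1n1), and each parsed controller is then filed into global lookup tables keyed by device name. Reconstructed with the exact values from this trace (array declarations are assumed to live earlier in functions.sh):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    declare -A nvme1_ns=( [1]=nvme1n1 )   # _ctrl_ns[1]; block node assigned last

    ctrl_dev=nvme1
    ctrls["$ctrl_dev"]=nvme1              # name of the ctrl's id-ctrl array
    nvmes["$ctrl_dev"]=nvme1_ns           # name of its namespace-map array
    bdfs["$ctrl_dev"]=0000:00:10.0        # PCI address resolved at @49
    ordered_ctrls[1]=nvme1                # ${ctrl_dev/nvme/} -> index 1

    # Consumers dereference by name, e.g.:
    # local -n ns_map=${nvmes[nvme1]}; echo "${ns_map[1]}"   # nvme1n1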
00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:17:51.654 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.654 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
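[annotation] wctemp=343 and cctemp=373 look odd until you recall that the Identify Controller temperature thresholds are reported in kelvins: this emulated controller warns near 70 °C and goes critical near 100 °C.

    wctemp=343; cctemp=373                   # kelvins, from the trace
    echo "warning:  $(( wctemp - 273 )) C"   # 70 C
    echo "critical: $(( cctemp - 273 )) C"   # 100 C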
00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:17:51.655 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:51.655 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
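
The @17-@23 calls traced above are single iterations of the nvme_get helper in nvme/functions.sh: each "reg : val" line printed by nvme-cli is split at the first colon and evaluated into a global associative array named after the device. A minimal sketch of that loop, reconstructed from the trace (the whitespace handling around keys and values is an assumption, since the trace only shows the already-clean results):

nvme_get() {                        # e.g. nvme_get nvme2 id-ctrl /dev/nvme2
    local ref=$1 reg val            # @17: name of the array to fill
    shift                           # @18: remaining args are the nvme-cli command
    local -gA "$ref=()"             # @20: declare the global associative array
    while IFS=: read -r reg val; do # @21: split at the first ':' only; the rest,
                                    #      including further colons, stays in val
        reg=${reg//[[:space:]]/}    # assumed: the trace shows unpadded keys like sqes
        [[ -n $val ]] || continue   # @22: skip headers and blank lines
        eval "${ref}[${reg}]=\"${val# }\""    # @23: e.g. nvme2[sqes]="0x66"
    done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: run id-ctrl / id-ns
}

Because only the first colon splits the line, multi-field values such as ps0 ("mp:25.00W operational enlat:16 exlat:4 ...") and the lbafN descriptors survive intact. The sqes=0x66 and cqes=0x44 values stored above encode the minimum and maximum entry sizes as powers of two in the low and high nibbles, i.e. 64-byte submission and 16-byte completion queue entries, the standard sizes.
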
00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.656 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 
22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- 
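
The @53-@58 lines interleaved above, at the start of the ng2n1 block, show how the suite walks each controller's namespaces: a nameref binds _ctrl_ns to the per-controller map, an extended glob matches both the generic character devices (ng2nY) and the block devices (nvme2nY) under the controller's sysfs node, and every match gets its own nvme_get pass. A sketch of that loop as it appears in the trace (the enclosing per-controller loop and the extglob shell option are assumed to be set up elsewhere in functions.sh):

local -n _ctrl_ns=${ctrl##*/}_ns                 # @53: nvme2 -> nvme2_ns
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54: ng2*|nvme2n*
    [[ -e $ns ]] || continue                     # @55: e.g. /sys/class/nvme/nvme2/ng2n1
    ns_dev=${ns##*/}                             # @56: ng2n1, ng2n2, ...
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # @57: fill the ng2n1=() array
    _ctrl_ns[${ns##*n}]=$ns_dev                  # @58: index by namespace number
done

Both name flavors match the pattern, so each namespace slot is written twice: the trace parses ng2n1 through ng2n3 first, then repeats the same id-ns values for nvme2n1 onward, leaving the block-device name as the final value in the map.
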
# IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.657 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.658 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:17:51.659 22:59:18 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 
22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.659 22:59:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:17:51.972 
22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.972 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
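
For orientation, the geometry fields being stored for each of these namespaces decode to the same layout: flbas=0x4 selects LBA format 4 in its low nibble, lbaf4 reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks with no metadata, and nsze=ncap=nuse=0x100000 report 1,048,576 of those blocks. A one-liner to check the resulting capacity:

echo $(( 0x100000 * (1 << 12) ))   # 4294967296 bytes, i.e. a 4 GiB namespace
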
00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:17:51.973 22:59:18 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:51.973 22:59:18 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:51.973 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:17:51.974 22:59:18 nvme_fdp -- 
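The trace above has just finished filling the ng2n3 array and, at nvme/functions.sh@54-57, moved on to nvme_get nvme2n1 id-ns /dev/nvme2n1. xtrace never prints the parser in one piece, but its shape can be reconstructed from the repeating @16-@23 lines: run nvme-cli, split each output line on the first ':', and eval the pair into a global associative array named after the device. A minimal sketch under that reading (the function name, line tags, and nvme-cli path match the trace; the exact trimming details are assumptions):

    nvme_get() {
      local ref=$1 reg val              # e.g. ref=nvme2n1; remaining args: id-ns /dev/nvme2n1 (@17)
      shift                             # (@18)
      local -gA "$ref=()"               # global assoc array named after the device, as printed at @20
      while IFS=: read -r reg val; do   # split "nsze : 0x100000" on the first ':' (@21)
        reg=${reg//[[:space:]]/}        # assumption: "lbaf  0" collapses to "lbaf0", as seen in the keys
        val=${val# }
        [[ -n $val ]] || continue       # the recurring '[[ -n ... ]]' guard (@22)
        eval "${ref}[${reg}]=\"${val}\""   # e.g. nvme2n1[nsze]="0x100000" (@23)
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # the invocation printed at @16
    }

Note that read hands everything after the first colon to val, so composite values such as 'ms:0 lbads:9 rp:0 ' survive intact, which is exactly what the lbafN entries in this trace show.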
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:17:51.974 22:59:18 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:51.974 
22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:17:51.974 22:59:19 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.974 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:51.975 
22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
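The lbaf0-lbaf7 entries captured here (for ng2n3 above and for nvme2n1 across this stretch) each describe one LBA format from the Identify Namespace data: ms is the metadata bytes per block, lbads is the log2 of the LBA data size (9 -> 512 B, 12 -> 4096 B), and rp is the relative-performance hint; the '(in use)' suffix marks the format currently selected, lbaf4 here, consistent with flbas=0x4. A one-line decode of the implied block size, as a sketch:

    lbads=12                 # from 'ms:0 lbads:12 rp:0 (in use)'
    echo $((1 << lbads))     # 4096 bytes per LBA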
00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:17:51.975 22:59:19 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:17:51.975 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:17:51.976 22:59:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:51.976 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:17:51.977 22:59:19 
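Within this excerpt the per-namespace loop has already run for ng2n3, nvme2n1, and nvme2n2, and is now starting on nvme2n3. The loop header visible at nvme/functions.sh@54 is a bash extglob pattern: for ctrl=/sys/class/nvme/nvme2 it expands to both the generic character devices ng2n* and the block devices nvme2n* under the controller's sysfs directory. Because glob results sort lexicographically, ng2nY precedes nvme2nY, so the later _ctrl_ns[${ns##*n}]=nvme2nY assignment (@58) overwrites the ng2nY entry for the same namespace ID, as happens for NSID 3 in this trace. A standalone sketch of the expansion:

    shopt -s extglob                     # assumed enabled by the surrounding script
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "${ns##*/}"                 # ng2n1 .. ng2n3, then nvme2n1 .. nvme2n3, when present
    done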
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:17:51.977 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:17:51.978 22:59:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:17:51.978 22:59:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:17:51.978 22:59:19 nvme_fdp -- scripts/common.sh@18 -- # local i 00:17:51.978 22:59:19 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:17:51.978 22:59:19 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:17:51.978 22:59:19 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.978 22:59:19 nvme_fdp -- 
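With nvme2n3 parsed, the controller is finalized at nvme/functions.sh@60-63 and the outer @47 loop advances to /sys/class/nvme/nvme3, whose PCI address 0000:00:13.0 passes the pci_can_use filter (the [[ =~ ]] and [[ -z '' ]] checks in scripts/common.sh come out empty here, i.e. no PCI allow/block list is set in this run). The bookkeeping amounts to four parallel maps; a sketch using the names from the trace (the declarations are assumed, since they happen outside this excerpt):

    declare -A ctrls nvmes bdfs              # assumption: declared once, earlier in nvme/functions.sh
    declare -a ordered_ctrls
    ctrl_dev=nvme2
    ctrls["$ctrl_dev"]=nvme2                 # controller name -> its identify-controller array
    nvmes["$ctrl_dev"]=nvme2_ns              # controller name -> name of its per-namespace map
    bdfs["$ctrl_dev"]=0000:00:12.0           # controller name -> PCI address from sysfs
    ordered_ctrls[${ctrl_dev/nvme/}]=nvme2   # numeric index 2 -> controller name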
nvme/functions.sh@21 -- # read -r reg val 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:17:51.978 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 
22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:17:51.979 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.980 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:17:51.981 22:59:19 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:17:51.981 22:59:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 ))
00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:17:51.982 22:59:19 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:17:51.982 22:59:19 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:17:51.982 22:59:19 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:17:51.982 22:59:19 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:17:52.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:53.484 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:17:53.484 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:17:53.484 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:17:53.484 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:17:53.484 22:59:20 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:17:53.484 22:59:20 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:17:53.484 22:59:20 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:53.484 22:59:20 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:17:53.484 ************************************
00:17:53.484 START TEST nvme_flexible_data_placement
00:17:53.484 ************************************
00:17:53.484 22:59:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:17:53.743 Initializing NVMe Controllers
00:17:53.743 Attaching to 0000:00:13.0
00:17:53.743 Controller supports FDP Attached to 0000:00:13.0
00:17:53.743 Namespace ID: 1 Endurance Group ID: 1
00:17:53.743 Initialization complete.
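
The controller selection traced above comes down to one predicate: read each controller's CTRATT word out of its identify data and test bit 19, the Flexible Data Placement capability. nvme0, nvme1 and nvme2 report ctratt 0x8000 (bit 19 clear) and are skipped; nvme3 reports 0x88010 (bit 19 set), so the fdp test binds to its bdf, 0000:00:13.0. A minimal standalone sketch of that check, assuming nvme-cli's plain-text id-ctrl output (functions.sh itself reads the cached value from the bash arrays filled in by the nvme_get loop above rather than re-invoking nvme):

    # Succeeds when the given controller advertises Flexible Data Placement.
    ctrl_has_fdp() {
        local ctrl=$1 ctratt
        # id-ctrl prints a line such as "ctratt : 0x88010"; keep the value field.
        ctratt=$(nvme id-ctrl "/dev/$ctrl" | awk '/^ctratt/ {print $3}')
        # CTRATT bit 19 (mask 0x80000) is the FDP capability flag.
        (( ctratt & 1 << 19 ))
    }

    ctrl_has_fdp nvme3 && echo nvme3   # 0x88010 & 0x80000 != 0, matching the trace
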
00:17:53.743
00:17:53.743 ==================================
00:17:53.743 == FDP tests for Namespace: #01 ==
00:17:53.743 ==================================
00:17:53.743
00:17:53.743 Get Feature: FDP:
00:17:53.743 =================
00:17:53.743 Enabled: Yes
00:17:53.743 FDP configuration Index: 0
00:17:53.743
00:17:53.743 FDP configurations log page
00:17:53.743 ===========================
00:17:53.743 Number of FDP configurations: 1
00:17:53.743 Version: 0
00:17:53.743 Size: 112
00:17:53.743 FDP Configuration Descriptor: 0
00:17:53.743 Descriptor Size: 96
00:17:53.743 Reclaim Group Identifier format: 2
00:17:53.743 FDP Volatile Write Cache: Not Present
00:17:53.743 FDP Configuration: Valid
00:17:53.743 Vendor Specific Size: 0
00:17:53.743 Number of Reclaim Groups: 2
00:17:53.743 Number of Reclaim Unit Handles: 8
00:17:53.743 Max Placement Identifiers: 128
00:17:53.743 Number of Namespaces Supported: 256
00:17:53.743 Reclaim Unit Nominal Size: 6000000 bytes
00:17:53.743 Estimated Reclaim Unit Time Limit: Not Reported
00:17:53.743 RUH Desc #000: RUH Type: Initially Isolated
00:17:53.743 RUH Desc #001: RUH Type: Initially Isolated
00:17:53.743 RUH Desc #002: RUH Type: Initially Isolated
00:17:53.743 RUH Desc #003: RUH Type: Initially Isolated
00:17:53.743 RUH Desc #004: RUH Type: Initially Isolated
00:17:53.743 RUH Desc #005: RUH Type: Initially Isolated
00:17:53.743 RUH Desc #006: RUH Type: Initially Isolated
00:17:53.743 RUH Desc #007: RUH Type: Initially Isolated
00:17:53.743
00:17:53.743 FDP reclaim unit handle usage log page
00:17:53.743 ======================================
00:17:53.743 Number of Reclaim Unit Handles: 8
00:17:53.743 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:17:53.743 RUH Usage Desc #001: RUH Attributes: Unused
00:17:53.743 RUH Usage Desc #002: RUH Attributes: Unused
00:17:53.743 RUH Usage Desc #003: RUH Attributes: Unused
00:17:53.743 RUH Usage Desc #004: RUH Attributes: Unused
00:17:53.743 RUH Usage Desc #005: RUH Attributes: Unused
00:17:53.743 RUH Usage Desc #006: RUH Attributes: Unused
00:17:53.743 RUH Usage Desc #007: RUH Attributes: Unused
00:17:53.743
00:17:53.743 FDP statistics log page
00:17:53.743 =======================
00:17:53.743 Host bytes with metadata written: 929030144
00:17:53.743 Media bytes with metadata written: 929128448
00:17:53.743 Media bytes erased: 0
00:17:53.743
00:17:53.743 FDP Reclaim unit handle status
00:17:53.743 ==============================
00:17:53.743 Number of RUHS descriptors: 2
00:17:53.743 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004a02
00:17:53.743 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:17:53.743
00:17:53.743 FDP write on placement id: 0 success
00:17:53.743
00:17:53.743 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:17:53.743
00:17:53.743 IO mgmt send: RUH update for Placement ID: #0 Success
00:17:53.743
00:17:53.743 Get Feature: FDP Events for Placement handle: #0
00:17:53.743 ========================
00:17:53.743 Number of FDP Events: 6
00:17:53.743 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:17:53.743 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:17:53.743 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:17:53.743 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:17:53.743 FDP Event: #4 Type: Media Reallocated Enabled: No
00:17:53.743 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:17:53.743
00:17:53.743 FDP events log page
00:17:53.743 ===================
00:17:53.743 Number of FDP events: 1
00:17:53.743 FDP Event #0:
00:17:53.743 Event Type: RU Not Written to Capacity
00:17:53.743 Placement Identifier: Valid
00:17:53.743 NSID: Valid
00:17:53.743 Location: Valid
00:17:53.743 Placement Identifier: 0
00:17:53.743 Event Timestamp: 8
00:17:53.743 Namespace Identifier: 1
00:17:53.743 Reclaim Group Identifier: 0
00:17:53.743 Reclaim Unit Handle Identifier: 0
00:17:53.743
00:17:53.743 FDP test passed
00:17:53.743
00:17:53.743 real 0m0.299s
00:17:53.743 user 0m0.084s
00:17:53.743 sys 0m0.114s
00:17:53.743 22:59:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:53.743 22:59:20 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:17:53.743 ************************************
00:17:53.743 END TEST nvme_flexible_data_placement
00:17:53.743 ************************************
00:17:53.743 ************************************
00:17:53.743 END TEST nvme_fdp
00:17:53.743 ************************************
00:17:53.743
00:17:53.743 real 0m8.636s
00:17:53.743 user 0m1.446s
00:17:53.743 sys 0m2.269s
00:17:53.743 22:59:21 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:53.743 22:59:21 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:17:54.003 22:59:21 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:17:54.003 22:59:21 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:17:54.003 22:59:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:54.003 22:59:21 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:54.003 22:59:21 -- common/autotest_common.sh@10 -- # set +x
00:17:54.003 ************************************
00:17:54.003 START TEST nvme_rpc
00:17:54.003 ************************************
00:17:54.003 22:59:21 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:17:54.003 * Looking for test storage...
00:17:54.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:54.003 22:59:21 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:54.003 22:59:21 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:54.003 22:59:21 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:54.003 22:59:21 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.003 22:59:21 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:17:54.003 22:59:21 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.003 22:59:21 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:54.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.003 --rc genhtml_branch_coverage=1 00:17:54.003 --rc genhtml_function_coverage=1 00:17:54.003 --rc genhtml_legend=1 00:17:54.003 --rc geninfo_all_blocks=1 00:17:54.003 --rc geninfo_unexecuted_blocks=1 00:17:54.003 00:17:54.003 ' 00:17:54.003 22:59:21 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:54.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.003 --rc genhtml_branch_coverage=1 00:17:54.003 --rc genhtml_function_coverage=1 00:17:54.003 --rc genhtml_legend=1 00:17:54.003 --rc geninfo_all_blocks=1 00:17:54.003 --rc geninfo_unexecuted_blocks=1 00:17:54.003 00:17:54.003 ' 00:17:54.003 22:59:21 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:54.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.003 --rc genhtml_branch_coverage=1 00:17:54.003 --rc genhtml_function_coverage=1 00:17:54.003 --rc genhtml_legend=1 00:17:54.003 --rc geninfo_all_blocks=1 00:17:54.003 --rc geninfo_unexecuted_blocks=1 00:17:54.003 00:17:54.003 ' 00:17:54.003 22:59:21 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:54.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.004 --rc genhtml_branch_coverage=1 00:17:54.004 --rc genhtml_function_coverage=1 00:17:54.004 --rc genhtml_legend=1 00:17:54.004 --rc geninfo_all_blocks=1 00:17:54.004 --rc geninfo_unexecuted_blocks=1 00:17:54.004 00:17:54.004 ' 00:17:54.004 22:59:21 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.004 22:59:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:17:54.004 22:59:21 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:17:54.004 22:59:21 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:17:54.004 22:59:21 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:17:54.004 22:59:21 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:17:54.004 22:59:21 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:54.004 22:59:21 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:17:54.004 22:59:21 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:54.004 22:59:21 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:54.004 22:59:21 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:54.263 22:59:21 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:17:54.263 22:59:21 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:17:54.263 22:59:21 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:17:54.263 22:59:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:17:54.263 22:59:21 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:17:54.263 22:59:21 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67458 00:17:54.263 22:59:21 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:17:54.263 22:59:21 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67458 00:17:54.263 22:59:21 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67458 ']' 00:17:54.263 22:59:21 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.263 22:59:21 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.263 22:59:21 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.263 22:59:21 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.263 22:59:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.263 [2024-12-09 22:59:21.529525] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
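
Before the target comes up, nvme_rpc.sh resolves the first NVMe bdf exactly as traced above: gen_nvme.sh emits the bdev configuration as JSON, jq pulls each controller's PCI address, and the first of the four is used. A minimal sketch of that flow, assuming the repo layout shown in the trace (the helper name mirrors the traced function, but this standalone form is illustrative):

    rootdir=/home/vagrant/spdk_repo/spdk

    # Collect every NVMe PCI address from the generated bdev config and
    # print the first one; the trace above resolves this to 0000:00:10.0.
    get_first_nvme_bdf() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1
        echo "${bdfs[0]}"
    }

That bdf then feeds bdev_nvme_attach_controller, after which the test deliberately drives bdev_nvme_apply_firmware at a nonexistent image and expects the -32603 "open file failed." JSON-RPC error seen in the trace that follows.
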
00:17:54.263 [2024-12-09 22:59:21.529667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67458 ] 00:17:54.522 [2024-12-09 22:59:21.707439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:54.522 [2024-12-09 22:59:21.837649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.522 [2024-12-09 22:59:21.837687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.895 22:59:22 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.895 22:59:22 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:55.895 22:59:22 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:17:55.895 Nvme0n1 00:17:55.895 22:59:23 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:17:55.895 22:59:23 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:17:56.153 request: 00:17:56.153 { 00:17:56.153 "bdev_name": "Nvme0n1", 00:17:56.153 "filename": "non_existing_file", 00:17:56.153 "method": "bdev_nvme_apply_firmware", 00:17:56.153 "req_id": 1 00:17:56.153 } 00:17:56.153 Got JSON-RPC error response 00:17:56.153 response: 00:17:56.153 { 00:17:56.153 "code": -32603, 00:17:56.153 "message": "open file failed." 00:17:56.153 } 00:17:56.153 22:59:23 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:17:56.153 22:59:23 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:17:56.153 22:59:23 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:17:56.411 22:59:23 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:56.411 22:59:23 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67458 00:17:56.411 22:59:23 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67458 ']' 00:17:56.411 22:59:23 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67458 00:17:56.411 22:59:23 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:56.411 22:59:23 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.411 22:59:23 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67458 00:17:56.411 22:59:23 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.411 22:59:23 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.411 killing process with pid 67458 00:17:56.411 22:59:23 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67458' 00:17:56.411 22:59:23 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67458 00:17:56.411 22:59:23 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67458 00:17:58.989 00:17:58.989 real 0m5.111s 00:17:58.989 user 0m9.436s 00:17:58.989 sys 0m0.859s 00:17:58.989 22:59:26 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.989 22:59:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.989 ************************************ 00:17:58.989 END TEST nvme_rpc 00:17:58.989 ************************************ 00:17:58.989 22:59:26 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:17:58.989 22:59:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:17:58.989 22:59:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.989 22:59:26 -- common/autotest_common.sh@10 -- # set +x 00:17:58.989 ************************************ 00:17:58.989 START TEST nvme_rpc_timeouts 00:17:58.989 ************************************ 00:17:58.989 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:17:59.248 * Looking for test storage... 00:17:59.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:59.248 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:59.248 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:59.248 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:17:59.248 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.248 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:17:59.249 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.249 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:17:59.249 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:17:59.249 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.249 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:17:59.249 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.249 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.249 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.249 22:59:26 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:17:59.249 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.249 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:59.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.249 --rc genhtml_branch_coverage=1 00:17:59.249 --rc genhtml_function_coverage=1 00:17:59.249 --rc genhtml_legend=1 00:17:59.249 --rc geninfo_all_blocks=1 00:17:59.249 --rc geninfo_unexecuted_blocks=1 00:17:59.249 00:17:59.249 ' 00:17:59.249 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:59.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.249 --rc genhtml_branch_coverage=1 00:17:59.249 --rc genhtml_function_coverage=1 00:17:59.249 --rc genhtml_legend=1 00:17:59.249 --rc geninfo_all_blocks=1 00:17:59.249 --rc geninfo_unexecuted_blocks=1 00:17:59.249 00:17:59.249 ' 00:17:59.249 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:59.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.249 --rc genhtml_branch_coverage=1 00:17:59.249 --rc genhtml_function_coverage=1 00:17:59.249 --rc genhtml_legend=1 00:17:59.249 --rc geninfo_all_blocks=1 00:17:59.249 --rc geninfo_unexecuted_blocks=1 00:17:59.249 00:17:59.249 ' 00:17:59.249 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:59.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.249 --rc genhtml_branch_coverage=1 00:17:59.249 --rc genhtml_function_coverage=1 00:17:59.249 --rc genhtml_legend=1 00:17:59.249 --rc geninfo_all_blocks=1 00:17:59.249 --rc geninfo_unexecuted_blocks=1 00:17:59.249 00:17:59.249 ' 00:17:59.249 22:59:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.249 22:59:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67539 00:17:59.249 22:59:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67539 00:17:59.249 22:59:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67577 00:17:59.249 22:59:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:17:59.249 22:59:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:17:59.249 22:59:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67577 00:17:59.249 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67577 ']' 00:17:59.249 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.249 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.249 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.249 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.249 22:59:26 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:17:59.508 [2024-12-09 22:59:26.646320] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:17:59.508 [2024-12-09 22:59:26.646489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67577 ] 00:17:59.508 [2024-12-09 22:59:26.822183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:59.768 [2024-12-09 22:59:26.965261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.768 [2024-12-09 22:59:26.965271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.705 22:59:28 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.705 22:59:28 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:18:00.705 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:18:00.705 Checking default timeout settings: 00:18:00.705 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:01.270 Making settings changes with rpc: 00:18:01.270 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:18:01.270 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:18:01.529 Check default vs. modified settings: 00:18:01.529 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:18:01.529 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67539 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67539 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:18:01.787 Setting action_on_timeout is changed as expected. 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67539 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:18:01.787 22:59:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:18:01.787 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:18:01.787 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67539 00:18:01.787 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:18:01.787 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:18:01.787 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:18:01.787 Setting timeout_us is changed as expected. 00:18:01.787 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:18:01.787 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:18:01.787 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67539 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67539 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:18:01.788 Setting timeout_admin_us is changed as expected. 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67539 /tmp/settings_modified_67539 00:18:01.788 22:59:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67577 00:18:01.788 22:59:29 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67577 ']' 00:18:01.788 22:59:29 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67577 00:18:01.788 22:59:29 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:18:01.788 22:59:29 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.788 22:59:29 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67577 00:18:01.788 22:59:29 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.788 22:59:29 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.788 22:59:29 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67577' 00:18:01.788 killing process with pid 67577 00:18:01.788 22:59:29 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67577 00:18:01.788 22:59:29 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67577 00:18:05.069 RPC TIMEOUT SETTING TEST PASSED. 00:18:05.069 22:59:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
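The timeout checks above reduce to a save/modify/save/compare cycle: dump the default bdev configuration, change the NVMe timeouts over JSON-RPC, dump again, and assert that every setting moved. A condensed sketch of that cycle; the rpc.py subcommands and flags are taken straight from the trace, while the exact save_config layout that the grep/awk/sed extraction relies on is an assumption here:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default        # defaults: action_on_timeout=none, both timeouts 0
  $rpc bdev_nvme_set_options --timeout-us=12000000 \
       --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified
  for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" != "$after" ] || { echo "Setting $setting is NOT changed"; exit 1; }
    echo "Setting $setting is changed as expected."
  done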
00:18:05.069 00:18:05.069 real 0m5.423s 00:18:05.069 user 0m10.155s 00:18:05.069 sys 0m0.940s 00:18:05.069 22:59:31 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.069 ************************************ 00:18:05.069 22:59:31 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:18:05.069 END TEST nvme_rpc_timeouts 00:18:05.069 ************************************ 00:18:05.069 22:59:31 -- spdk/autotest.sh@239 -- # uname -s 00:18:05.069 22:59:31 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:18:05.069 22:59:31 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:18:05.069 22:59:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:05.069 22:59:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.069 22:59:31 -- common/autotest_common.sh@10 -- # set +x 00:18:05.069 ************************************ 00:18:05.069 START TEST sw_hotplug 00:18:05.069 ************************************ 00:18:05.069 22:59:31 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:18:05.069 * Looking for test storage... 00:18:05.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:18:05.069 22:59:31 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:05.069 22:59:31 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:18:05.069 22:59:31 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:05.069 22:59:32 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:18:05.069 22:59:32 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.070 22:59:32 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:18:05.070 22:59:32 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.070 22:59:32 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:05.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.070 --rc genhtml_branch_coverage=1 00:18:05.070 --rc genhtml_function_coverage=1 00:18:05.070 --rc genhtml_legend=1 00:18:05.070 --rc geninfo_all_blocks=1 00:18:05.070 --rc geninfo_unexecuted_blocks=1 00:18:05.070 00:18:05.070 ' 00:18:05.070 22:59:32 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:05.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.070 --rc genhtml_branch_coverage=1 00:18:05.070 --rc genhtml_function_coverage=1 00:18:05.070 --rc genhtml_legend=1 00:18:05.070 --rc geninfo_all_blocks=1 00:18:05.070 --rc geninfo_unexecuted_blocks=1 00:18:05.070 00:18:05.070 ' 00:18:05.070 22:59:32 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:05.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.070 --rc genhtml_branch_coverage=1 00:18:05.070 --rc genhtml_function_coverage=1 00:18:05.070 --rc genhtml_legend=1 00:18:05.070 --rc geninfo_all_blocks=1 00:18:05.070 --rc geninfo_unexecuted_blocks=1 00:18:05.070 00:18:05.070 ' 00:18:05.070 22:59:32 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:05.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.070 --rc genhtml_branch_coverage=1 00:18:05.070 --rc genhtml_function_coverage=1 00:18:05.070 --rc genhtml_legend=1 00:18:05.070 --rc geninfo_all_blocks=1 00:18:05.070 --rc geninfo_unexecuted_blocks=1 00:18:05.070 00:18:05.070 ' 00:18:05.070 22:59:32 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:05.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:05.587 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:05.587 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:05.587 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:05.587 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:05.587 22:59:32 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:18:05.587 22:59:32 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:18:05.587 22:59:32 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
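Both suites gate their lcov options on the same scripts/common.sh version test traced above (lt 1.15 2): split each version string on '.', '-', and ':', then compare component by component, padding the shorter version with zeros. A reconstruction from the xtrace; the real helper also validates each component through its decimal function, which is folded into a ":-0" default here:

  cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2; local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $2 == '>' ]]; return; fi
      if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $2 == '<' ]]; return; fi
    done
    [[ $2 == '==' ]]                     # every component equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds, so lcov 1.x gets the --rc flags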
00:18:05.587 22:59:32 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@233 -- # local class 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:18:05.587 22:59:32 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@18 -- # local i 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:18:05.587 22:59:32 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:18:05.587 22:59:32 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:18:05.587 22:59:32 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:18:05.587 22:59:32 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:06.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:06.413 Waiting for block devices as requested 00:18:06.413 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:06.671 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:06.671 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:06.931 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:18:12.202 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:18:12.202 22:59:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:18:12.202 22:59:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:12.460 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:18:12.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:12.460 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:18:13.093 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:18:13.093 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:13.093 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:18:13.352 22:59:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68469 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:18:13.352 22:59:40 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:18:13.352 22:59:40 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:18:13.352 22:59:40 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:18:13.352 22:59:40 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:18:13.352 22:59:40 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:18:13.352 22:59:40 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:18:13.611 Initializing NVMe Controllers 00:18:13.611 Attaching to 0000:00:10.0 00:18:13.611 Attaching to 0000:00:11.0 00:18:13.611 Attached to 0000:00:10.0 00:18:13.611 Attached to 0000:00:11.0 00:18:13.611 Initialization complete. Starting I/O... 
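The nvme_in_userspace expansion above boils down to a single lspci pipeline: list every PCI function, keep the ones whose class code is 0x0108 with progif 02 (NVM Express), and strip the quoting. Lifted from the trace; the pci_can_use allow/block filtering and the PCI_ALLOWED trim to the first two controllers are elided:

  # class 01 / subclass 08 / progif 02 => NVMe; prints BDFs such as 0000:00:10.0
  lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # a BDF still owned by the kernel driver shows up under sysfs and is rebound by setup.sh:
  [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] && echo 'still on kernel nvme'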
00:18:13.611 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:18:13.611 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:18:13.611 00:18:14.986 QEMU NVMe Ctrl (12340 ): 1389 I/Os completed (+1389) 00:18:14.986 QEMU NVMe Ctrl (12341 ): 1459 I/Os completed (+1459) 00:18:14.986 00:18:15.922 QEMU NVMe Ctrl (12340 ): 3238 I/Os completed (+1849) 00:18:15.922 QEMU NVMe Ctrl (12341 ): 3307 I/Os completed (+1848) 00:18:15.922 00:18:16.857 QEMU NVMe Ctrl (12340 ): 5112 I/Os completed (+1874) 00:18:16.857 QEMU NVMe Ctrl (12341 ): 5275 I/Os completed (+1968) 00:18:16.857 00:18:17.792 QEMU NVMe Ctrl (12340 ): 7084 I/Os completed (+1972) 00:18:17.792 QEMU NVMe Ctrl (12341 ): 7254 I/Os completed (+1979) 00:18:17.792 00:18:18.728 QEMU NVMe Ctrl (12340 ): 9124 I/Os completed (+2040) 00:18:18.728 QEMU NVMe Ctrl (12341 ): 9301 I/Os completed (+2047) 00:18:18.728 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:19.665 [2024-12-09 22:59:46.677270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:19.665 Controller removed: QEMU NVMe Ctrl (12340 ) 00:18:19.665 [2024-12-09 22:59:46.679199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.679273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.679297] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.679321] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:18:19.665 [2024-12-09 22:59:46.682544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.682705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.682735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.682757] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:19.665 [2024-12-09 22:59:46.719030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:18:19.665 Controller removed: QEMU NVMe Ctrl (12341 ) 00:18:19.665 [2024-12-09 22:59:46.720969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.721165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.721230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.721367] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:18:19.665 [2024-12-09 22:59:46.727831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.728004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.728045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 [2024-12-09 22:59:46.728064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:19.665 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:18:19.665 EAL: Scan for (pci) bus failed. 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:19.665 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:19.665 22:59:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:19.665 Attaching to 0000:00:10.0 00:18:19.665 Attached to 0000:00:10.0 00:18:19.925 22:59:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:19.925 22:59:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:19.925 22:59:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:19.925 Attaching to 0000:00:11.0 00:18:19.925 Attached to 0000:00:11.0 00:18:20.862 QEMU NVMe Ctrl (12340 ): 1804 I/Os completed (+1804) 00:18:20.862 QEMU NVMe Ctrl (12341 ): 1588 I/Os completed (+1588) 00:18:20.862 00:18:21.800 QEMU NVMe Ctrl (12340 ): 3796 I/Os completed (+1992) 00:18:21.800 QEMU NVMe Ctrl (12341 ): 3594 I/Os completed (+2006) 00:18:21.800 00:18:22.736 QEMU NVMe Ctrl (12340 ): 5797 I/Os completed (+2001) 00:18:22.736 QEMU NVMe Ctrl (12341 ): 5588 I/Os completed (+1994) 00:18:22.736 00:18:23.672 QEMU NVMe Ctrl (12340 ): 7869 I/Os completed (+2072) 00:18:23.672 QEMU NVMe Ctrl (12341 ): 7660 I/Os completed (+2072) 00:18:23.672 00:18:24.608 QEMU NVMe Ctrl (12340 ): 9893 I/Os completed (+2024) 00:18:24.608 QEMU NVMe Ctrl (12341 ): 9684 I/Os completed (+2024) 00:18:24.608 00:18:25.583 QEMU NVMe Ctrl (12340 ): 11922 I/Os completed (+2029) 00:18:25.583 QEMU NVMe Ctrl (12341 ): 11718 I/Os completed (+2034) 00:18:25.583 00:18:26.962 QEMU NVMe Ctrl (12340 ): 13994 I/Os completed (+2072) 00:18:26.962 QEMU NVMe Ctrl (12341 ): 13790 I/Os completed (+2072) 
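Each hotplug event above is plain sysfs: surprise-remove both functions while the example app drives I/O (producing the failed-state, abort-tracker and unregister_dev lines), wait, then bring them back. The xtrace only shows the values being echoed (sw_hotplug.sh@40, @56, @59-62); the redirect targets below are the conventional remove/rescan/probe nodes and are an assumption:

  for bdf in 0000:00:10.0 0000:00:11.0; do
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"            # surprise-remove under load
  done
  sleep 6                                                  # hotplug_wait
  echo 1 > /sys/bus/pci/rescan                             # re-enumerate the removed functions
  for bdf in 0000:00:10.0 0000:00:11.0; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe               # app logs "Attaching to $bdf"
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # clear the override again
  done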
00:18:26.962 00:18:27.898 QEMU NVMe Ctrl (12340 ): 16022 I/Os completed (+2028) 00:18:27.898 QEMU NVMe Ctrl (12341 ): 15822 I/Os completed (+2032) 00:18:27.898 00:18:28.834 QEMU NVMe Ctrl (12340 ): 17992 I/Os completed (+1970) 00:18:28.834 QEMU NVMe Ctrl (12341 ): 17815 I/Os completed (+1993) 00:18:28.834 00:18:29.771 QEMU NVMe Ctrl (12340 ): 19894 I/Os completed (+1902) 00:18:29.771 QEMU NVMe Ctrl (12341 ): 19743 I/Os completed (+1928) 00:18:29.771 00:18:30.709 QEMU NVMe Ctrl (12340 ): 21943 I/Os completed (+2049) 00:18:30.709 QEMU NVMe Ctrl (12341 ): 21793 I/Os completed (+2050) 00:18:30.709 00:18:31.646 QEMU NVMe Ctrl (12340 ): 24035 I/Os completed (+2092) 00:18:31.646 QEMU NVMe Ctrl (12341 ): 23887 I/Os completed (+2094) 00:18:31.646 00:18:31.904 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:18:31.904 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:31.904 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:31.904 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:31.904 [2024-12-09 22:59:59.065689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:31.904 Controller removed: QEMU NVMe Ctrl (12340 ) 00:18:31.904 [2024-12-09 22:59:59.067710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.904 [2024-12-09 22:59:59.067887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 [2024-12-09 22:59:59.067947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 [2024-12-09 22:59:59.068095] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:18:31.905 [2024-12-09 22:59:59.071311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 [2024-12-09 22:59:59.071477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 [2024-12-09 22:59:59.071648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 [2024-12-09 22:59:59.071702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:18:31.905 EAL: Scan for (pci) bus failed. 00:18:31.905 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:31.905 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:31.905 [2024-12-09 22:59:59.103860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:18:31.905 Controller removed: QEMU NVMe Ctrl (12341 ) 00:18:31.905 [2024-12-09 22:59:59.105579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 [2024-12-09 22:59:59.105677] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 [2024-12-09 22:59:59.105710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 [2024-12-09 22:59:59.105732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:18:31.905 [2024-12-09 22:59:59.108612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 [2024-12-09 22:59:59.108658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 [2024-12-09 22:59:59.108681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 [2024-12-09 22:59:59.108702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:31.905 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:18:31.905 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:31.905 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:31.905 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:31.905 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:32.163 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:32.163 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:32.163 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:32.163 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:32.163 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:32.163 Attaching to 0000:00:10.0 00:18:32.163 Attached to 0000:00:10.0 00:18:32.163 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:32.163 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:32.163 22:59:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:32.163 Attaching to 0000:00:11.0 00:18:32.163 Attached to 0000:00:11.0 00:18:32.730 QEMU NVMe Ctrl (12340 ): 1066 I/Os completed (+1066) 00:18:32.730 QEMU NVMe Ctrl (12341 ): 854 I/Os completed (+854) 00:18:32.730 00:18:33.668 QEMU NVMe Ctrl (12340 ): 3130 I/Os completed (+2064) 00:18:33.668 QEMU NVMe Ctrl (12341 ): 2921 I/Os completed (+2067) 00:18:33.668 00:18:34.604 QEMU NVMe Ctrl (12340 ): 5026 I/Os completed (+1896) 00:18:34.604 QEMU NVMe Ctrl (12341 ): 4813 I/Os completed (+1892) 00:18:34.604 00:18:35.539 QEMU NVMe Ctrl (12340 ): 6965 I/Os completed (+1939) 00:18:35.539 QEMU NVMe Ctrl (12341 ): 6783 I/Os completed (+1970) 00:18:35.539 00:18:36.917 QEMU NVMe Ctrl (12340 ): 9085 I/Os completed (+2120) 00:18:36.917 QEMU NVMe Ctrl (12341 ): 8903 I/Os completed (+2120) 00:18:36.917 00:18:37.857 QEMU NVMe Ctrl (12340 ): 11201 I/Os completed (+2116) 00:18:37.857 QEMU NVMe Ctrl (12341 ): 11019 I/Os completed (+2116) 00:18:37.857 00:18:38.794 QEMU NVMe Ctrl (12340 ): 13225 I/Os completed (+2024) 00:18:38.794 QEMU NVMe Ctrl (12341 ): 13048 I/Os completed (+2029) 00:18:38.794 00:18:39.726 QEMU NVMe Ctrl (12340 ): 15305 I/Os completed (+2080) 00:18:39.726 QEMU NVMe Ctrl (12341 ): 15128 I/Os completed (+2080) 00:18:39.726 00:18:40.661 QEMU 
NVMe Ctrl (12340 ): 17461 I/Os completed (+2156) 00:18:40.661 QEMU NVMe Ctrl (12341 ): 17284 I/Os completed (+2156) 00:18:40.661 00:18:41.596 QEMU NVMe Ctrl (12340 ): 19577 I/Os completed (+2116) 00:18:41.596 QEMU NVMe Ctrl (12341 ): 19400 I/Os completed (+2116) 00:18:41.596 00:18:42.964 QEMU NVMe Ctrl (12340 ): 21761 I/Os completed (+2184) 00:18:42.964 QEMU NVMe Ctrl (12341 ): 21584 I/Os completed (+2184) 00:18:42.964 00:18:43.527 QEMU NVMe Ctrl (12340 ): 23937 I/Os completed (+2176) 00:18:43.527 QEMU NVMe Ctrl (12341 ): 23760 I/Os completed (+2176) 00:18:43.527 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:44.461 [2024-12-09 23:00:11.450405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:18:44.461 Controller removed: QEMU NVMe Ctrl (12340 ) 00:18:44.461 [2024-12-09 23:00:11.453262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.453530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.453700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.453776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:18:44.461 [2024-12-09 23:00:11.457940] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.458134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.458207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.458329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:18:44.461 [2024-12-09 23:00:11.483147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:18:44.461 Controller removed: QEMU NVMe Ctrl (12341 ) 00:18:44.461 [2024-12-09 23:00:11.485858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.486076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.486235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.486303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:18:44.461 [2024-12-09 23:00:11.490168] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.490322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.490467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 [2024-12-09 23:00:11.490539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:18:44.461 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:18:44.461 EAL: Scan for (pci) bus failed. 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:18:44.461 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:18:44.461 Attaching to 0000:00:10.0 00:18:44.461 Attached to 0000:00:10.0 00:18:44.720 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:18:44.720 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:18:44.720 23:00:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:18:44.720 Attaching to 0000:00:11.0 00:18:44.720 Attached to 0000:00:11.0 00:18:44.720 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:18:44.720 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:18:44.720 [2024-12-09 23:00:11.866976] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:18:56.957 23:00:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:18:56.957 23:00:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:18:56.957 23:00:23 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.18 00:18:56.957 23:00:23 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.18 00:18:56.957 23:00:23 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:18:56.957 23:00:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.18 00:18:56.957 23:00:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.18 2 00:18:56.957 remove_attach_helper took 43.18s to complete (handling 2 nvme drive(s)) 23:00:23 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:19:03.608 23:00:29 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68469 00:19:03.608 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68469) - No such process 00:19:03.608 23:00:29 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68469 00:19:03.608 23:00:29 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:19:03.608 23:00:29 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:19:03.608 23:00:29 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:19:03.608 23:00:29 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69007 00:19:03.608 23:00:29 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:03.608 23:00:29 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:19:03.608 23:00:29 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69007 00:19:03.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.608 23:00:29 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69007 ']' 00:19:03.608 23:00:29 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.608 23:00:29 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.608 23:00:29 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.608 23:00:29 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.608 23:00:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:03.608 [2024-12-09 23:00:30.001233] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
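The 43.18 surfacing above comes from the timing_cmd wrapper: run the helper under bash's time builtin with TIMEFORMAT=%2R so only the elapsed wall-clock seconds (two decimals) are emitted. A simplified sketch; the real wrapper keeps the helper's own output flowing through exec fd redirection, dropped here for brevity:

  timing_cmd() {                # e.g. timing_cmd remove_attach_helper 3 6 false
    local time TIMEFORMAT=%2R   # %2R: elapsed seconds, two decimal places
    time=$( { time "$@" > /dev/null 2>&1; } 2>&1 )   # the time report lands on the shell's stderr
    printf '%s took %ss to complete\n' "$1" "$time"
  }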
00:19:03.608 [2024-12-09 23:00:30.001410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69007 ] 00:19:03.608 [2024-12-09 23:00:30.193905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.608 [2024-12-09 23:00:30.342952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.176 23:00:31 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.176 23:00:31 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:19:04.176 23:00:31 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:19:04.176 23:00:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.176 23:00:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:04.176 23:00:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.176 23:00:31 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:19:04.176 23:00:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:19:04.176 23:00:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:19:04.176 23:00:31 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:19:04.176 23:00:31 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:19:04.176 23:00:31 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:19:04.176 23:00:31 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:19:04.176 23:00:31 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:19:04.176 23:00:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:19:04.176 23:00:31 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:19:04.176 23:00:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:19:04.176 23:00:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:19:04.176 23:00:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:19:10.763 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:10.763 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:10.763 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:10.763 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:10.763 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:10.763 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:19:10.763 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:10.763 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:10.763 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:10.763 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:10.763 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:10.763 23:00:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.763 23:00:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:10.763 [2024-12-09 23:00:37.561259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
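With a plain spdk_tgt in place of the example app, the test flips to use_bdev=true and watches the target's bdev list instead of app console output. The two RPC building blocks, exactly as traced (rpc_cmd is the suite's wrapper around rpc.py, simplified to a one-liner here):

  rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # simplified wrapper
  rpc_cmd bdev_nvme_set_hotplug -e       # have the target poll for PCIe add/remove events
  bdev_bdfs() {                          # BDFs of the NVMe controllers currently attached
    rpc_cmd bdev_get_bdevs \
      | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
  }
  bdfs=($(bdev_bdfs))                    # e.g. 0000:00:10.0 0000:00:11.0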
00:19:10.763 [2024-12-09 23:00:37.564079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:10.764 [2024-12-09 23:00:37.564130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.764 [2024-12-09 23:00:37.564158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.764 [2024-12-09 23:00:37.564191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:10.764 [2024-12-09 23:00:37.564204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.764 [2024-12-09 23:00:37.564222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.764 [2024-12-09 23:00:37.564238] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:10.764 [2024-12-09 23:00:37.564254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.764 [2024-12-09 23:00:37.564266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.764 [2024-12-09 23:00:37.564287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:10.764 [2024-12-09 23:00:37.564299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.764 [2024-12-09 23:00:37.564314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.764 23:00:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.764 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:19:10.764 23:00:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:19:10.764 [2024-12-09 23:00:37.960637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:19:10.764 [2024-12-09 23:00:37.963751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:10.764 [2024-12-09 23:00:37.963805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.764 [2024-12-09 23:00:37.963827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.764 [2024-12-09 23:00:37.963855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:10.764 [2024-12-09 23:00:37.963871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.764 [2024-12-09 23:00:37.963885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.764 [2024-12-09 23:00:37.963903] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:10.764 [2024-12-09 23:00:37.963915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.764 [2024-12-09 23:00:37.963946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.764 [2024-12-09 23:00:37.963961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:10.764 [2024-12-09 23:00:37.963975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.764 [2024-12-09 23:00:37.963988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:11.023 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:19:11.023 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:11.023 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:11.023 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:11.023 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:11.023 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:11.023 23:00:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.023 23:00:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:11.023 23:00:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.023 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:19:11.023 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:11.023 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:11.023 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:11.023 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:11.289 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:11.289 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:11.289 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:11.289 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:11.289 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:19:11.289 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:11.289 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:11.289 23:00:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:23.517 23:00:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.517 23:00:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:23.517 23:00:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:23.517 [2024-12-09 23:00:50.640239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:19:23.517 [2024-12-09 23:00:50.643216] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:23.517 [2024-12-09 23:00:50.643275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.517 [2024-12-09 23:00:50.643293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.517 [2024-12-09 23:00:50.643322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:23.517 [2024-12-09 23:00:50.643334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.517 [2024-12-09 23:00:50.643350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.517 [2024-12-09 23:00:50.643363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:23.517 [2024-12-09 23:00:50.643376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.517 [2024-12-09 23:00:50.643388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.517 [2024-12-09 23:00:50.643404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:23.517 [2024-12-09 23:00:50.643415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.517 [2024-12-09 23:00:50.643430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:23.517 23:00:50 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:23.517 23:00:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.517 23:00:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:23.517 23:00:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:19:23.517 23:00:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:19:23.777 [2024-12-09 23:00:51.039612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:19:23.777 [2024-12-09 23:00:51.042288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:23.777 [2024-12-09 23:00:51.042338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.777 [2024-12-09 23:00:51.042362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.777 [2024-12-09 23:00:51.042387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:23.777 [2024-12-09 23:00:51.042410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.777 [2024-12-09 23:00:51.042422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.777 [2024-12-09 23:00:51.042439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:23.777 [2024-12-09 23:00:51.042463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.777 [2024-12-09 23:00:51.042479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.777 [2024-12-09 23:00:51.042494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:23.777 [2024-12-09 23:00:51.042508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:23.777 [2024-12-09 23:00:51.042520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.036 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:19:24.036 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:24.036 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:24.036 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:24.036 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:24.036 23:00:51 sw_hotplug -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.036 23:00:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:24.036 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:24.036 23:00:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.036 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:19:24.036 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:24.036 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:24.036 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:24.036 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:24.295 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:24.295 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:24.295 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:24.295 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:24.295 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:19:24.295 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:24.295 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:24.295 23:00:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:36.509 23:01:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.509 23:01:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:36.509 23:01:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:36.509 [2024-12-09 23:01:03.719196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:36.509 [2024-12-09 23:01:03.721780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:36.509 [2024-12-09 23:01:03.721833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.509 [2024-12-09 23:01:03.721851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.509 [2024-12-09 23:01:03.721879] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:36.509 [2024-12-09 23:01:03.721891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.509 [2024-12-09 23:01:03.721909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.509 [2024-12-09 23:01:03.721922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:36.509 [2024-12-09 23:01:03.721937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.509 [2024-12-09 23:01:03.721949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.509 [2024-12-09 23:01:03.721965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:36.509 [2024-12-09 23:01:03.721976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:36.509 [2024-12-09 23:01:03.721991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:36.509 23:01:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.509 23:01:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:36.509 23:01:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:19:36.509 23:01:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:19:37.078 [2024-12-09 23:01:04.118560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:19:37.078 [2024-12-09 23:01:04.121316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:37.078 [2024-12-09 23:01:04.121365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.078 [2024-12-09 23:01:04.121390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.078 [2024-12-09 23:01:04.121414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:37.078 [2024-12-09 23:01:04.121431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.078 [2024-12-09 23:01:04.121444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.078 [2024-12-09 23:01:04.121475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:37.078 [2024-12-09 23:01:04.121488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.078 [2024-12-09 23:01:04.121511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.078 [2024-12-09 23:01:04.121526] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:37.078 [2024-12-09 23:01:04.121543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.078 [2024-12-09 23:01:04.121555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.078 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:19:37.078 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:37.078 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:37.078 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:37.078 23:01:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.078 23:01:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:37.078 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:37.078 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:37.078 23:01:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.078 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:19:37.078 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:37.336 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:37.336 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:37.336 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:37.336 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:37.336 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:37.336 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:37.336 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:37.336 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:19:37.336 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:37.336 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:37.336 23:01:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:19:49.536 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:19:49.536 23:01:16 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:19:49.536 23:01:16 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:19:49.536 23:01:16 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:56.163 23:01:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.163 23:01:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 [2024-12-09 23:01:22.804847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:19:56.163 [2024-12-09 23:01:22.807582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:56.163 [2024-12-09 23:01:22.807642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.163 [2024-12-09 23:01:22.807661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.163 [2024-12-09 23:01:22.807691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:56.163 [2024-12-09 23:01:22.807705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.163 [2024-12-09 23:01:22.807721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.163 [2024-12-09 23:01:22.807736] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:56.163 [2024-12-09 23:01:22.807752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.163 [2024-12-09 23:01:22.807766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.163 [2024-12-09 23:01:22.807783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:56.163 [2024-12-09 23:01:22.807796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.163 [2024-12-09 23:01:22.807815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.163 23:01:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:19:56.163 23:01:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:19:56.163 [2024-12-09 23:01:23.204225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:19:56.163 [2024-12-09 23:01:23.208118] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:56.163 [2024-12-09 23:01:23.208189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.163 [2024-12-09 23:01:23.208221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.163 [2024-12-09 23:01:23.208257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:56.163 [2024-12-09 23:01:23.208281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.163 [2024-12-09 23:01:23.208303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.163 [2024-12-09 23:01:23.208328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:56.163 [2024-12-09 23:01:23.208348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.163 [2024-12-09 23:01:23.208371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.163 [2024-12-09 23:01:23.208392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:56.163 [2024-12-09 23:01:23.208414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.163 [2024-12-09 23:01:23.208431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.163 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:19:56.163 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:19:56.163 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:19:56.163 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:19:56.163 23:01:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.163 23:01:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:56.163 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:19:56.163 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:19:56.163 23:01:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.163 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:19:56.163 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:56.421 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:56.421 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:56.421 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:56.421 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:56.421 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:56.421 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:56.421 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:56.421 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:19:56.421 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:56.680 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:56.680 23:01:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:20:08.899 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:20:08.899 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:08.900 23:01:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.900 23:01:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:08.900 23:01:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:08.900 [2024-12-09 23:01:35.883775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:20:08.900 [2024-12-09 23:01:35.886682] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:08.900 [2024-12-09 23:01:35.886733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.900 [2024-12-09 23:01:35.886751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.900 [2024-12-09 23:01:35.886781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:08.900 [2024-12-09 23:01:35.886795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.900 [2024-12-09 23:01:35.886810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.900 [2024-12-09 23:01:35.886826] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:08.900 [2024-12-09 23:01:35.886841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.900 [2024-12-09 23:01:35.886853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.900 [2024-12-09 23:01:35.886872] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:08.900 [2024-12-09 23:01:35.886884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.900 [2024-12-09 23:01:35.886902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:08.900 23:01:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.900 23:01:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:08.900 23:01:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:20:08.900 23:01:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:20:09.157 [2024-12-09 23:01:36.382960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:20:09.157 [2024-12-09 23:01:36.388165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:09.157 [2024-12-09 23:01:36.388223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.157 [2024-12-09 23:01:36.388244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.157 [2024-12-09 23:01:36.388269] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:09.157 [2024-12-09 23:01:36.388289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.157 [2024-12-09 23:01:36.388301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.157 [2024-12-09 23:01:36.388317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:09.157 [2024-12-09 23:01:36.388329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.157 [2024-12-09 23:01:36.388346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.157 [2024-12-09 23:01:36.388361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:09.157 [2024-12-09 23:01:36.388375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:09.157 [2024-12-09 23:01:36.388387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:09.157 [2024-12-09 23:01:36.388407] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:20:09.157 [2024-12-09 23:01:36.388430] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:20:09.157 [2024-12-09 23:01:36.388444] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:20:09.157 [2024-12-09 23:01:36.388474] bdev_nvme.c:5588:aer_cb: *WARNING*: AER request execute failed 00:20:09.157 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 
00:20:09.157 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:20:09.157 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:20:09.157 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:09.157 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:09.157 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:09.157 23:01:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.157 23:01:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:09.157 23:01:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.157 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:20:09.157 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:20:09.415 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:09.415 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:09.415 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:20:09.415 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:20:09.416 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:09.416 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:09.416 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:09.416 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:20:09.675 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:20:09.675 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:09.675 23:01:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:20:21.884 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:20:21.884 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:20:21.884 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:21.885 23:01:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.885 23:01:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:21.885 23:01:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:21.885 23:01:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:21.885 23:01:48 sw_hotplug -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.885 23:01:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:21.885 [2024-12-09 23:01:48.962733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:20:21.885 [2024-12-09 23:01:48.965313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:21.885 [2024-12-09 23:01:48.965366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.885 [2024-12-09 23:01:48.965384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.885 [2024-12-09 23:01:48.965420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:21.885 [2024-12-09 23:01:48.965433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.885 [2024-12-09 23:01:48.965460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.885 [2024-12-09 23:01:48.965475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:21.885 [2024-12-09 23:01:48.965490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.885 [2024-12-09 23:01:48.965504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.885 [2024-12-09 23:01:48.965520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:21.885 [2024-12-09 23:01:48.965531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:21.885 [2024-12-09 23:01:48.965546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:21.885 23:01:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.885 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:20:21.885 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:20:22.144 [2024-12-09 23:01:49.362100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:20:22.144 [2024-12-09 23:01:49.364089] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:22.144 [2024-12-09 23:01:49.364138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.144 [2024-12-09 23:01:49.364158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.144 [2024-12-09 23:01:49.364184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:22.144 [2024-12-09 23:01:49.364200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.144 [2024-12-09 23:01:49.364213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.144 [2024-12-09 23:01:49.364233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:22.144 [2024-12-09 23:01:49.364244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.144 [2024-12-09 23:01:49.364259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.144 [2024-12-09 23:01:49.364273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:22.144 [2024-12-09 23:01:49.364288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.144 [2024-12-09 23:01:49.364300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.402 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:20:22.402 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:20:22.402 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:20:22.402 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:22.402 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:22.402 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:22.402 23:01:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.402 23:01:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:22.402 23:01:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.402 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:20:22.402 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:20:22.402 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:22.402 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:22.402 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:20:22.660 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:20:22.660 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:22.660 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:22.660 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:22.660 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:20:22.661 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:20:22.661 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:22.661 23:01:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:20:34.869 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:20:34.869 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:20:34.869 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:20:34.869 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:34.869 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:34.869 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.869 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:34.869 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:20:34.869 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:20:34.869 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:20:34.869 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:20:34.869 23:02:01 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69007 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69007 ']' 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69007 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69007 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:34.869 killing process with pid 69007 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69007' 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69007 00:20:34.869 23:02:01 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69007 00:20:37.398 23:02:04 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:37.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.228 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:38.228 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:38.493 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:38.493 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:38.493 00:20:38.493 real 2m34.000s 00:20:38.493 user 1m51.563s 00:20:38.493 sys 0m22.760s 00:20:38.493 23:02:05 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.493 ************************************ 00:20:38.493 END TEST sw_hotplug 00:20:38.493 ************************************ 00:20:38.493 23:02:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:38.754 23:02:05 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:20:38.754 23:02:05 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:20:38.754 23:02:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:38.754 23:02:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.754 23:02:05 -- common/autotest_common.sh@10 -- # set +x 00:20:38.754 ************************************ 00:20:38.754 START TEST nvme_xnvme 00:20:38.754 ************************************ 00:20:38.754 23:02:05 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:20:38.755 * Looking for test storage... 00:20:38.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:20:38.755 23:02:06 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:38.755 23:02:06 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:20:38.755 23:02:06 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:39.015 23:02:06 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:20:39.015 23:02:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.016 23:02:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:20:39.016 23:02:06 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.016 23:02:06 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:20:39.016 23:02:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:20:39.016 23:02:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.016 23:02:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:20:39.016 23:02:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.016 23:02:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.016 23:02:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.016 23:02:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:39.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.016 --rc genhtml_branch_coverage=1 00:20:39.016 --rc genhtml_function_coverage=1 00:20:39.016 --rc genhtml_legend=1 00:20:39.016 --rc geninfo_all_blocks=1 00:20:39.016 --rc geninfo_unexecuted_blocks=1 00:20:39.016 00:20:39.016 ' 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:39.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.016 --rc genhtml_branch_coverage=1 00:20:39.016 --rc genhtml_function_coverage=1 00:20:39.016 --rc genhtml_legend=1 00:20:39.016 --rc geninfo_all_blocks=1 00:20:39.016 --rc geninfo_unexecuted_blocks=1 00:20:39.016 00:20:39.016 ' 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:39.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.016 --rc genhtml_branch_coverage=1 00:20:39.016 --rc genhtml_function_coverage=1 00:20:39.016 --rc genhtml_legend=1 00:20:39.016 --rc geninfo_all_blocks=1 00:20:39.016 --rc geninfo_unexecuted_blocks=1 00:20:39.016 00:20:39.016 ' 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:39.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.016 --rc genhtml_branch_coverage=1 00:20:39.016 --rc genhtml_function_coverage=1 00:20:39.016 --rc genhtml_legend=1 00:20:39.016 --rc geninfo_all_blocks=1 00:20:39.016 --rc geninfo_unexecuted_blocks=1 00:20:39.016 00:20:39.016 ' 00:20:39.016 23:02:06 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:20:39.016 23:02:06 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:20:39.016 23:02:06 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:20:39.016 23:02:06 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:20:39.016 23:02:06 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:20:39.016 23:02:06 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:20:39.017 #define SPDK_CONFIG_H 00:20:39.017 #define SPDK_CONFIG_AIO_FSDEV 1 00:20:39.017 #define SPDK_CONFIG_APPS 1 00:20:39.017 #define SPDK_CONFIG_ARCH native 00:20:39.017 #define SPDK_CONFIG_ASAN 1 00:20:39.017 #undef SPDK_CONFIG_AVAHI 00:20:39.017 #undef SPDK_CONFIG_CET 00:20:39.017 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:20:39.017 #define SPDK_CONFIG_COVERAGE 1 00:20:39.017 #define SPDK_CONFIG_CROSS_PREFIX 00:20:39.017 #undef SPDK_CONFIG_CRYPTO 00:20:39.017 #undef SPDK_CONFIG_CRYPTO_MLX5 00:20:39.017 #undef SPDK_CONFIG_CUSTOMOCF 00:20:39.017 #undef SPDK_CONFIG_DAOS 00:20:39.017 #define SPDK_CONFIG_DAOS_DIR 00:20:39.017 #define SPDK_CONFIG_DEBUG 1 00:20:39.017 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:20:39.017 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:20:39.017 #define SPDK_CONFIG_DPDK_INC_DIR 00:20:39.017 #define SPDK_CONFIG_DPDK_LIB_DIR 00:20:39.017 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:20:39.017 #undef SPDK_CONFIG_DPDK_UADK 00:20:39.017 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:20:39.017 #define SPDK_CONFIG_EXAMPLES 1 00:20:39.017 #undef SPDK_CONFIG_FC 00:20:39.017 #define SPDK_CONFIG_FC_PATH 00:20:39.017 #define SPDK_CONFIG_FIO_PLUGIN 1 00:20:39.017 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:20:39.017 #define SPDK_CONFIG_FSDEV 1 00:20:39.017 #undef SPDK_CONFIG_FUSE 00:20:39.017 #undef SPDK_CONFIG_FUZZER 00:20:39.017 #define SPDK_CONFIG_FUZZER_LIB 00:20:39.017 #undef SPDK_CONFIG_GOLANG 00:20:39.017 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:20:39.017 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:20:39.017 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:20:39.017 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:20:39.017 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:20:39.017 #undef SPDK_CONFIG_HAVE_LIBBSD 00:20:39.017 #undef SPDK_CONFIG_HAVE_LZ4 00:20:39.017 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:20:39.017 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:20:39.017 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:20:39.017 #define SPDK_CONFIG_IDXD 1 00:20:39.017 #define SPDK_CONFIG_IDXD_KERNEL 1 00:20:39.017 #undef SPDK_CONFIG_IPSEC_MB 00:20:39.017 #define SPDK_CONFIG_IPSEC_MB_DIR 00:20:39.017 #define SPDK_CONFIG_ISAL 1 00:20:39.017 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:20:39.017 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:20:39.017 #define SPDK_CONFIG_LIBDIR 00:20:39.017 #undef SPDK_CONFIG_LTO 00:20:39.017 #define SPDK_CONFIG_MAX_LCORES 128 00:20:39.017 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:20:39.017 #define SPDK_CONFIG_NVME_CUSE 1 00:20:39.017 #undef SPDK_CONFIG_OCF 00:20:39.017 #define SPDK_CONFIG_OCF_PATH 00:20:39.017 #define SPDK_CONFIG_OPENSSL_PATH 00:20:39.017 #undef SPDK_CONFIG_PGO_CAPTURE 00:20:39.017 #define SPDK_CONFIG_PGO_DIR 00:20:39.017 #undef SPDK_CONFIG_PGO_USE 00:20:39.017 #define SPDK_CONFIG_PREFIX /usr/local 00:20:39.017 #undef SPDK_CONFIG_RAID5F 00:20:39.017 #undef SPDK_CONFIG_RBD 00:20:39.017 #define SPDK_CONFIG_RDMA 1 00:20:39.017 #define SPDK_CONFIG_RDMA_PROV verbs 00:20:39.017 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:20:39.017 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:20:39.017 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:20:39.017 #define SPDK_CONFIG_SHARED 1 00:20:39.017 #undef SPDK_CONFIG_SMA 00:20:39.017 #define SPDK_CONFIG_TESTS 1 00:20:39.017 #undef SPDK_CONFIG_TSAN 00:20:39.017 #define SPDK_CONFIG_UBLK 1 00:20:39.017 #define SPDK_CONFIG_UBSAN 1 00:20:39.017 #undef SPDK_CONFIG_UNIT_TESTS 00:20:39.017 #undef SPDK_CONFIG_URING 00:20:39.017 #define SPDK_CONFIG_URING_PATH 00:20:39.017 #undef SPDK_CONFIG_URING_ZNS 00:20:39.017 #undef SPDK_CONFIG_USDT 00:20:39.017 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:20:39.017 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:20:39.017 #undef SPDK_CONFIG_VFIO_USER 00:20:39.017 #define SPDK_CONFIG_VFIO_USER_DIR 00:20:39.017 #define SPDK_CONFIG_VHOST 1 00:20:39.017 #define SPDK_CONFIG_VIRTIO 1 00:20:39.017 #undef SPDK_CONFIG_VTUNE 00:20:39.017 #define SPDK_CONFIG_VTUNE_DIR 00:20:39.017 #define SPDK_CONFIG_WERROR 1 00:20:39.017 #define SPDK_CONFIG_WPDK_DIR 00:20:39.017 #define SPDK_CONFIG_XNVME 1 00:20:39.017 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:20:39.017 23:02:06 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:20:39.017 23:02:06 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.017 23:02:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.017 23:02:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.017 23:02:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.017 23:02:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.017 23:02:06 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.017 23:02:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.017 23:02:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.017 23:02:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:20:39.017 23:02:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@68 -- # uname -s 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:20:39.017 
23:02:06 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:20:39.017 23:02:06 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:20:39.017 23:02:06 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:20:39.018 23:02:06 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:20:39.018 23:02:06 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:20:39.018 23:02:06 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
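The exports above configure the sanitizer runtimes before any test binary runs. A hedged sketch of the same setup, with the option strings copied from the trace and the suppression-file path as shown there:

# Rebuild the LSan suppression file, then point the runtimes at it.
supp=/var/tmp/asan_suppression_file
rm -rf "$supp"
echo "leak:libfuse3.so" > "$supp"   # known libfuse3 leak, suppressed harness-wide

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
export LSAN_OPTIONS=suppressions=$supp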
00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70354 ]] 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70354 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.3Fy28O 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.3Fy28O/tests/xnvme /tmp/spdk.3Fy28O 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:20:39.019 23:02:06 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974355968 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593989120 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974355968 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593989120 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:20:39.019 23:02:06 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:20:39.019 23:02:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=92421697536 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=7281082368 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:20:39.020 * Looking for test storage... 
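set_test_storage, traced above, walks `df -T` output into associative arrays keyed by mount point before picking a candidate directory. A self-contained sketch of that parsing pattern, with array and field names mirroring the trace and the 2 GiB request size it shows:

declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts[$mount]=$source
    fss[$mount]=$fs
    sizes[$mount]=$((size * 1024))    # df -T reports 1K blocks; convert to bytes
    uses[$mount]=$((use * 1024))
    avails[$mount]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)

requested_size=2147483648   # 2 GiB, as requested by the harness
for m in "${!avails[@]}"; do
    (( avails[$m] >= requested_size )) && echo "usable: $m (${fss[$m]}, ${avails[$m]} bytes free)"
done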
00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974355968 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:20:39.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:20:39.020 23:02:06 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:39.279 23:02:06 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:20:39.279 23:02:06 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:39.279 23:02:06 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:39.279 23:02:06 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:20:39.280 23:02:06 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.280 23:02:06 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:39.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.280 --rc genhtml_branch_coverage=1 00:20:39.280 --rc genhtml_function_coverage=1 00:20:39.280 --rc genhtml_legend=1 00:20:39.280 --rc geninfo_all_blocks=1 00:20:39.280 --rc geninfo_unexecuted_blocks=1 00:20:39.280 00:20:39.280 ' 00:20:39.280 23:02:06 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:39.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.280 --rc genhtml_branch_coverage=1 00:20:39.280 --rc genhtml_function_coverage=1 00:20:39.280 --rc genhtml_legend=1 00:20:39.280 --rc geninfo_all_blocks=1 
00:20:39.280 --rc geninfo_unexecuted_blocks=1 00:20:39.280 00:20:39.280 ' 00:20:39.280 23:02:06 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:39.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.280 --rc genhtml_branch_coverage=1 00:20:39.280 --rc genhtml_function_coverage=1 00:20:39.280 --rc genhtml_legend=1 00:20:39.280 --rc geninfo_all_blocks=1 00:20:39.280 --rc geninfo_unexecuted_blocks=1 00:20:39.280 00:20:39.280 ' 00:20:39.280 23:02:06 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:39.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.280 --rc genhtml_branch_coverage=1 00:20:39.280 --rc genhtml_function_coverage=1 00:20:39.280 --rc genhtml_legend=1 00:20:39.280 --rc geninfo_all_blocks=1 00:20:39.280 --rc geninfo_unexecuted_blocks=1 00:20:39.280 00:20:39.280 ' 00:20:39.280 23:02:06 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.280 23:02:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.280 23:02:06 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.280 23:02:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.280 23:02:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.280 23:02:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:20:39.280 23:02:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.280 23:02:06 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:20:39.280 23:02:06 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:39.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:40.107 Waiting for block devices as requested 00:20:40.108 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.366 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.366 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.366 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.648 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:45.648 23:02:12 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:20:45.951 23:02:13 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:20:46.240 23:02:13 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:20:46.240 23:02:13 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:20:46.240 23:02:13 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:20:46.240 23:02:13 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:20:46.240 23:02:13 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:20:46.240 23:02:13 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:20:46.240 No valid GPT data, bailing 00:20:46.240 23:02:13 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:46.499 23:02:13 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:20:46.499 23:02:13 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:20:46.499 23:02:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:20:46.499 23:02:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:46.499 23:02:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.499 23:02:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:46.499 ************************************ 00:20:46.499 START TEST xnvme_rpc 00:20:46.499 ************************************ 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70759 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70759 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70759 ']' 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:46.499 23:02:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:46.499 [2024-12-09 23:02:13.712252] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
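prep_nvme, traced above, reloads the nvme driver with poll_queues and then screens namespaces for existing data before handing them to xnvme. A sketch of that screening loop, assuming the same extglob pattern and blkid call seen in the trace; the SPDK-specific spdk-gpt.py probe is omitted here:

shopt -s extglob
for nvme in /dev/nvme*n!(*p*); do   # namespaces (nvme0n1), but not partitions (nvme0n1p1)
    [[ -e $nvme ]] || continue      # skip the literal pattern if nothing matched
    pt=$(blkid -s PTTYPE -o value "$nvme" || true)
    if [[ -n $pt ]]; then
        echo "skipping $nvme: partition table ($pt) present"
        continue
    fi
    echo "$nvme is free for xnvme tests"
done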
00:20:46.499 [2024-12-09 23:02:13.712407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70759 ] 00:20:46.758 [2024-12-09 23:02:13.896819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.758 [2024-12-09 23:02:14.023888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.695 23:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.695 23:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:47.695 23:02:14 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:20:47.695 23:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.695 23:02:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.695 xnvme_bdev 00:20:47.695 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.695 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:20:47.695 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:47.695 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:20:47.695 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.695 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70759 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70759 ']' 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70759 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70759 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:47.952 killing process with pid 70759 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70759' 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70759 00:20:47.952 23:02:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70759 00:20:50.507 00:20:50.507 real 0m4.201s 00:20:50.507 user 0m4.153s 00:20:50.507 sys 0m0.643s 00:20:50.507 23:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.507 23:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.507 ************************************ 00:20:50.507 END TEST xnvme_rpc 00:20:50.507 ************************************ 00:20:50.765 23:02:17 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:20:50.765 23:02:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:50.765 23:02:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.765 23:02:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:50.765 ************************************ 00:20:50.765 START TEST xnvme_bdevperf 00:20:50.765 ************************************ 00:20:50.765 23:02:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:20:50.765 23:02:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:20:50.765 23:02:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:20:50.765 23:02:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:50.765 23:02:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:20:50.765 23:02:17 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:20:50.766 23:02:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:50.766 23:02:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:50.766 { 00:20:50.766 "subsystems": [ 00:20:50.766 { 00:20:50.766 "subsystem": "bdev", 00:20:50.766 "config": [ 00:20:50.766 { 00:20:50.766 "params": { 00:20:50.766 "io_mechanism": "libaio", 00:20:50.766 "conserve_cpu": false, 00:20:50.766 "filename": "/dev/nvme0n1", 00:20:50.766 "name": "xnvme_bdev" 00:20:50.766 }, 00:20:50.766 "method": "bdev_xnvme_create" 00:20:50.766 }, 00:20:50.766 { 00:20:50.766 "method": "bdev_wait_for_examine" 00:20:50.766 } 00:20:50.766 ] 00:20:50.766 } 00:20:50.766 ] 00:20:50.766 } 00:20:50.766 [2024-12-09 23:02:17.969406] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:20:50.766 [2024-12-09 23:02:17.969599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70844 ] 00:20:51.025 [2024-12-09 23:02:18.155558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.025 [2024-12-09 23:02:18.283472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.603 Running I/O for 5 seconds... 00:20:53.520 36483.00 IOPS, 142.51 MiB/s [2024-12-09T23:02:21.787Z] 36616.50 IOPS, 143.03 MiB/s [2024-12-09T23:02:22.737Z] 36040.00 IOPS, 140.78 MiB/s [2024-12-09T23:02:24.110Z] 36912.25 IOPS, 144.19 MiB/s 00:20:56.774 Latency(us) 00:20:56.774 [2024-12-09T23:02:24.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.774 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:20:56.774 xnvme_bdev : 5.00 37547.99 146.67 0.00 0.00 1700.70 96.64 41058.70 00:20:56.774 [2024-12-09T23:02:24.110Z] =================================================================================================================== 00:20:56.774 [2024-12-09T23:02:24.110Z] Total : 37547.99 146.67 0.00 0.00 1700.70 96.64 41058.70 00:20:57.707 23:02:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:20:57.707 23:02:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:20:57.707 23:02:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:20:57.707 23:02:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:20:57.707 23:02:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:57.707 { 00:20:57.707 "subsystems": [ 00:20:57.707 { 00:20:57.707 "subsystem": "bdev", 00:20:57.707 "config": [ 00:20:57.707 { 00:20:57.707 "params": { 00:20:57.707 "io_mechanism": "libaio", 00:20:57.707 "conserve_cpu": false, 00:20:57.707 "filename": "/dev/nvme0n1", 00:20:57.707 "name": "xnvme_bdev" 00:20:57.707 }, 00:20:57.707 "method": "bdev_xnvme_create" 00:20:57.707 }, 00:20:57.707 { 00:20:57.707 "method": "bdev_wait_for_examine" 00:20:57.707 } 00:20:57.707 ] 00:20:57.707 } 00:20:57.707 ] 00:20:57.707 } 00:20:57.707 [2024-12-09 23:02:25.025599] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
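Both bdevperf passes in this block (the randread run above and the randwrite run starting here) reuse one invocation from xnvme.sh; the flags break down as sketched below, a reading of the command line in the xtrace rather than a substitute for bdevperf's own help output:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /dev/fd/62 \   # bdev config: the gen_conf JSON printed above, fed in on fd 62
        -q 64 \               # queue depth per job
        -w randwrite \        # workload pattern (randread in the previous pass)
        -t 5 \                # run time in seconds
        -T xnvme_bdev \       # restrict the run to this one bdev
        -o 4096               # I/O size in bytes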
00:20:57.707 [2024-12-09 23:02:25.025746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70925 ] 00:20:57.966 [2024-12-09 23:02:25.225340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.224 [2024-12-09 23:02:25.361138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.482 Running I/O for 5 seconds... 00:21:00.793 37215.00 IOPS, 145.37 MiB/s [2024-12-09T23:02:29.064Z] 36017.50 IOPS, 140.69 MiB/s [2024-12-09T23:02:30.002Z] 37516.33 IOPS, 146.55 MiB/s [2024-12-09T23:02:30.950Z] 36134.75 IOPS, 141.15 MiB/s 00:21:03.614 Latency(us) 00:21:03.614 [2024-12-09T23:02:30.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.614 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:03.614 xnvme_bdev : 5.00 36179.83 141.33 0.00 0.00 1764.49 85.13 6895.76 00:21:03.614 [2024-12-09T23:02:30.950Z] =================================================================================================================== 00:21:03.614 [2024-12-09T23:02:30.950Z] Total : 36179.83 141.33 0.00 0.00 1764.49 85.13 6895.76 00:21:04.988 00:21:04.988 real 0m14.151s 00:21:04.988 user 0m5.556s 00:21:04.988 sys 0m5.923s 00:21:04.988 23:02:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.988 23:02:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:04.988 ************************************ 00:21:04.988 END TEST xnvme_bdevperf 00:21:04.988 ************************************ 00:21:04.988 23:02:32 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:04.988 23:02:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:04.988 23:02:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.988 23:02:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:04.988 ************************************ 00:21:04.988 START TEST xnvme_fio_plugin 00:21:04.988 ************************************ 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:04.988 23:02:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:04.988 { 00:21:04.988 "subsystems": [ 00:21:04.988 { 00:21:04.988 "subsystem": "bdev", 00:21:04.988 "config": [ 00:21:04.988 { 00:21:04.988 "params": { 00:21:04.988 "io_mechanism": "libaio", 00:21:04.988 "conserve_cpu": false, 00:21:04.988 "filename": "/dev/nvme0n1", 00:21:04.988 "name": "xnvme_bdev" 00:21:04.988 }, 00:21:04.988 "method": "bdev_xnvme_create" 00:21:04.988 }, 00:21:04.988 { 00:21:04.988 "method": "bdev_wait_for_examine" 00:21:04.988 } 00:21:04.988 ] 00:21:04.988 } 00:21:04.988 ] 00:21:04.988 } 00:21:05.247 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:05.247 fio-3.35 00:21:05.247 Starting 1 thread 00:21:11.806 00:21:11.806 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71053: Mon Dec 9 23:02:38 2024 00:21:11.806 read: IOPS=47.5k, BW=186MiB/s (195MB/s)(929MiB/5001msec) 00:21:11.806 slat (usec): min=4, max=2230, avg=18.33, stdev=22.84 00:21:11.806 clat (usec): min=89, max=6290, avg=801.43, stdev=450.64 00:21:11.806 lat (usec): min=138, max=6357, avg=819.77, stdev=453.02 00:21:11.806 clat percentiles (usec): 00:21:11.806 | 1.00th=[ 180], 5.00th=[ 262], 10.00th=[ 334], 20.00th=[ 445], 00:21:11.806 | 30.00th=[ 545], 40.00th=[ 644], 50.00th=[ 742], 60.00th=[ 840], 00:21:11.806 | 70.00th=[ 947], 80.00th=[ 1074], 90.00th=[ 1270], 95.00th=[ 1483], 00:21:11.806 | 99.00th=[ 2507], 99.50th=[ 3064], 99.90th=[ 4228], 99.95th=[ 4490], 00:21:11.806 | 99.99th=[ 5342] 00:21:11.806 bw ( KiB/s): min=167944, max=216500, per=100.00%, avg=190612.89, stdev=16645.21, samples=9 
00:21:11.806 iops : min=41986, max=54125, avg=47653.22, stdev=4161.30, samples=9 00:21:11.806 lat (usec) : 100=0.05%, 250=4.20%, 500=21.04%, 750=25.62%, 1000=23.43% 00:21:11.806 lat (msec) : 2=23.81%, 4=1.71%, 10=0.15% 00:21:11.806 cpu : usr=24.90%, sys=52.64%, ctx=47, majf=0, minf=764 00:21:11.806 IO depths : 1=0.1%, 2=0.7%, 4=3.7%, 8=11.0%, 16=25.9%, 32=56.8%, >=64=1.8% 00:21:11.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.806 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:21:11.806 issued rwts: total=237697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.806 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:11.806 00:21:11.806 Run status group 0 (all jobs): 00:21:11.806 READ: bw=186MiB/s (195MB/s), 186MiB/s-186MiB/s (195MB/s-195MB/s), io=929MiB (974MB), run=5001-5001msec 00:21:12.375 ----------------------------------------------------- 00:21:12.375 Suppressions used: 00:21:12.375 count bytes template 00:21:12.375 1 11 /usr/src/fio/parse.c 00:21:12.375 1 8 libtcmalloc_minimal.so 00:21:12.375 1 904 libcrypto.so 00:21:12.375 ----------------------------------------------------- 00:21:12.375 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:12.375 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:12.376 23:02:39 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:12.376 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:12.376 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:12.376 23:02:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:12.635 { 00:21:12.635 "subsystems": [ 00:21:12.635 { 00:21:12.635 "subsystem": "bdev", 00:21:12.635 "config": [ 00:21:12.635 { 00:21:12.635 "params": { 00:21:12.635 "io_mechanism": "libaio", 00:21:12.635 "conserve_cpu": false, 00:21:12.635 "filename": "/dev/nvme0n1", 00:21:12.635 "name": "xnvme_bdev" 00:21:12.635 }, 00:21:12.635 "method": "bdev_xnvme_create" 00:21:12.635 }, 00:21:12.635 { 00:21:12.635 "method": "bdev_wait_for_examine" 00:21:12.635 } 00:21:12.635 ] 00:21:12.635 } 00:21:12.635 ] 00:21:12.635 } 00:21:12.635 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:12.635 fio-3.35 00:21:12.635 Starting 1 thread 00:21:19.251 00:21:19.251 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71151: Mon Dec 9 23:02:45 2024 00:21:19.251 write: IOPS=40.7k, BW=159MiB/s (167MB/s)(794MiB/5001msec); 0 zone resets 00:21:19.251 slat (usec): min=4, max=908, avg=21.43, stdev=27.87 00:21:19.251 clat (usec): min=79, max=6151, avg=929.58, stdev=615.93 00:21:19.251 lat (usec): min=97, max=6201, avg=951.01, stdev=621.97 00:21:19.251 clat percentiles (usec): 00:21:19.251 | 1.00th=[ 196], 5.00th=[ 281], 10.00th=[ 359], 20.00th=[ 482], 00:21:19.251 | 30.00th=[ 594], 40.00th=[ 701], 50.00th=[ 816], 60.00th=[ 930], 00:21:19.251 | 70.00th=[ 1057], 80.00th=[ 1205], 90.00th=[ 1516], 95.00th=[ 2057], 00:21:19.251 | 99.00th=[ 3556], 99.50th=[ 4047], 99.90th=[ 4686], 99.95th=[ 4948], 00:21:19.251 | 99.99th=[ 5407] 00:21:19.251 bw ( KiB/s): min=140968, max=179896, per=99.94%, avg=162555.56, stdev=12167.18, samples=9 00:21:19.251 iops : min=35242, max=44974, avg=40638.89, stdev=3041.79, samples=9 00:21:19.251 lat (usec) : 100=0.03%, 250=3.13%, 500=18.33%, 750=22.59%, 1000=22.10% 00:21:19.251 lat (msec) : 2=28.54%, 4=4.75%, 10=0.53% 00:21:19.251 cpu : usr=25.66%, sys=53.22%, ctx=59, majf=0, minf=765 00:21:19.251 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=11.1%, 16=26.0%, 32=56.1%, >=64=1.8% 00:21:19.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.251 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:21:19.251 issued rwts: total=0,203364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.251 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:19.251 00:21:19.251 Run status group 0 (all jobs): 00:21:19.251 WRITE: bw=159MiB/s (167MB/s), 159MiB/s-159MiB/s (167MB/s-167MB/s), io=794MiB (833MB), run=5001-5001msec 00:21:20.186 ----------------------------------------------------- 00:21:20.186 Suppressions used: 00:21:20.186 count bytes template 00:21:20.186 1 11 /usr/src/fio/parse.c 00:21:20.186 1 8 libtcmalloc_minimal.so 00:21:20.186 1 904 libcrypto.so 00:21:20.186 ----------------------------------------------------- 00:21:20.186 00:21:20.186 00:21:20.186 real 0m15.115s 00:21:20.186 user 0m6.418s 00:21:20.186 sys 0m6.173s 00:21:20.186 ************************************ 
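Both fio passes above drive the same bdev through fio's external spdk_bdev ioengine. Because this build is ASan-instrumented, the helper in autotest_common.sh locates the ASan runtime via ldd and preloads it ahead of the plugin, which is why LD_PRELOAD carries two entries in the xtrace. Condensed from the steps visible above:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # find the ASan runtime the plugin links against (/usr/lib64/libasan.so.8 on this host)
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev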
00:21:20.186 END TEST xnvme_fio_plugin 00:21:20.186 ************************************ 00:21:20.186 23:02:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.186 23:02:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:20.186 23:02:47 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:21:20.186 23:02:47 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:21:20.186 23:02:47 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:21:20.186 23:02:47 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:21:20.186 23:02:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:20.186 23:02:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.186 23:02:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:20.186 ************************************ 00:21:20.186 START TEST xnvme_rpc 00:21:20.186 ************************************ 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71239 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71239 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71239 ']' 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.186 23:02:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.186 [2024-12-09 23:02:47.398919] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
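This second xnvme_rpc pass repeats the cycle with CPU conservation switched on. Per the cc map set at the top of the test (cc["true"]=-c), the only difference on the wire is one extra flag at create time, and the config read-back is expected to report true; a sketch mirroring the rpc_cmd calls in the xtrace:

    scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c enables conserve_cpu
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # expected output: true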
00:21:20.186 [2024-12-09 23:02:47.399069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71239 ] 00:21:20.445 [2024-12-09 23:02:47.579875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.445 [2024-12-09 23:02:47.704097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.379 xnvme_bdev 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.379 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71239 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71239 ']' 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71239 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71239 00:21:21.637 killing process with pid 71239 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71239' 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71239 00:21:21.637 23:02:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71239 00:21:24.227 00:21:24.227 real 0m4.112s 00:21:24.227 user 0m4.060s 00:21:24.227 sys 0m0.643s 00:21:24.227 ************************************ 00:21:24.227 END TEST xnvme_rpc 00:21:24.227 ************************************ 00:21:24.227 23:02:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.227 23:02:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:24.227 23:02:51 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:24.227 23:02:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:24.227 23:02:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.227 23:02:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:24.227 ************************************ 00:21:24.227 START TEST xnvme_bdevperf 00:21:24.227 ************************************ 00:21:24.227 23:02:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:24.227 23:02:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:24.227 23:02:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:21:24.227 23:02:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:24.227 23:02:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:24.227 23:02:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T 
xnvme_bdev -o 4096 00:21:24.227 23:02:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:24.227 23:02:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:24.227 { 00:21:24.227 "subsystems": [ 00:21:24.227 { 00:21:24.227 "subsystem": "bdev", 00:21:24.227 "config": [ 00:21:24.227 { 00:21:24.227 "params": { 00:21:24.227 "io_mechanism": "libaio", 00:21:24.227 "conserve_cpu": true, 00:21:24.227 "filename": "/dev/nvme0n1", 00:21:24.227 "name": "xnvme_bdev" 00:21:24.227 }, 00:21:24.227 "method": "bdev_xnvme_create" 00:21:24.227 }, 00:21:24.227 { 00:21:24.227 "method": "bdev_wait_for_examine" 00:21:24.227 } 00:21:24.227 ] 00:21:24.227 } 00:21:24.227 ] 00:21:24.227 } 00:21:24.486 [2024-12-09 23:02:51.569574] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:21:24.486 [2024-12-09 23:02:51.569950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71325 ] 00:21:24.486 [2024-12-09 23:02:51.755023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.745 [2024-12-09 23:02:51.878721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.003 Running I/O for 5 seconds... 00:21:27.317 42422.00 IOPS, 165.71 MiB/s [2024-12-09T23:02:55.593Z] 41501.00 IOPS, 162.11 MiB/s [2024-12-09T23:02:56.530Z] 41691.00 IOPS, 162.86 MiB/s [2024-12-09T23:02:57.465Z] 42176.50 IOPS, 164.75 MiB/s [2024-12-09T23:02:57.465Z] 41827.60 IOPS, 163.39 MiB/s 00:21:30.129 Latency(us) 00:21:30.129 [2024-12-09T23:02:57.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.130 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:21:30.130 xnvme_bdev : 5.00 41801.93 163.29 0.00 0.00 1527.18 529.68 5606.09 00:21:30.130 [2024-12-09T23:02:57.466Z] =================================================================================================================== 00:21:30.130 [2024-12-09T23:02:57.466Z] Total : 41801.93 163.29 0.00 0.00 1527.18 529.68 5606.09 00:21:31.504 23:02:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:31.504 23:02:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:21:31.504 23:02:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:21:31.504 23:02:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:31.504 23:02:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:31.504 { 00:21:31.504 "subsystems": [ 00:21:31.504 { 00:21:31.504 "subsystem": "bdev", 00:21:31.504 "config": [ 00:21:31.504 { 00:21:31.504 "params": { 00:21:31.504 "io_mechanism": "libaio", 00:21:31.504 "conserve_cpu": true, 00:21:31.504 "filename": "/dev/nvme0n1", 00:21:31.504 "name": "xnvme_bdev" 00:21:31.504 }, 00:21:31.504 "method": "bdev_xnvme_create" 00:21:31.504 }, 00:21:31.504 { 00:21:31.504 "method": "bdev_wait_for_examine" 00:21:31.504 } 00:21:31.504 ] 00:21:31.504 } 00:21:31.504 ] 00:21:31.504 } 00:21:31.504 [2024-12-09 23:02:58.581255] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
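Every bdevperf and fio pass in this suite is configured by the same small JSON document, printed verbatim ahead of each run: one bdev_xnvme_create entry plus a bdev_wait_for_examine barrier that holds startup until bdev examination finishes. Reflowed for readability, with conserve_cpu now true for this block:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "libaio",
                "conserve_cpu": true,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }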
00:21:31.504 [2024-12-09 23:02:58.581392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71406 ] 00:21:31.504 [2024-12-09 23:02:58.764772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.762 [2024-12-09 23:02:58.891921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.021 Running I/O for 5 seconds... 00:21:33.965 38212.00 IOPS, 149.27 MiB/s [2024-12-09T23:03:02.678Z] 37263.50 IOPS, 145.56 MiB/s [2024-12-09T23:03:03.615Z] 37821.00 IOPS, 147.74 MiB/s [2024-12-09T23:03:04.550Z] 35446.00 IOPS, 138.46 MiB/s 00:21:37.214 Latency(us) 00:21:37.214 [2024-12-09T23:03:04.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.214 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:21:37.214 xnvme_bdev : 5.00 36232.55 141.53 0.00 0.00 1762.11 49.14 23371.87 00:21:37.214 [2024-12-09T23:03:04.550Z] =================================================================================================================== 00:21:37.214 [2024-12-09T23:03:04.550Z] Total : 36232.55 141.53 0.00 0.00 1762.11 49.14 23371.87 00:21:38.151 ************************************ 00:21:38.151 END TEST xnvme_bdevperf 00:21:38.151 ************************************ 00:21:38.151 00:21:38.151 real 0m14.015s 00:21:38.151 user 0m5.781s 00:21:38.151 sys 0m5.655s 00:21:38.151 23:03:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.151 23:03:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:38.410 23:03:05 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:21:38.410 23:03:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:38.410 23:03:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.410 23:03:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:38.410 ************************************ 00:21:38.410 START TEST xnvme_fio_plugin 00:21:38.410 ************************************ 00:21:38.410 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:21:38.410 23:03:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:21:38.410 23:03:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:21:38.410 23:03:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:38.410 23:03:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:38.411 23:03:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:38.411 { 00:21:38.411 "subsystems": [ 00:21:38.411 { 00:21:38.411 "subsystem": "bdev", 00:21:38.411 "config": [ 00:21:38.411 { 00:21:38.411 "params": { 00:21:38.411 "io_mechanism": "libaio", 00:21:38.411 "conserve_cpu": true, 00:21:38.411 "filename": "/dev/nvme0n1", 00:21:38.411 "name": "xnvme_bdev" 00:21:38.411 }, 00:21:38.411 "method": "bdev_xnvme_create" 00:21:38.411 }, 00:21:38.411 { 00:21:38.411 "method": "bdev_wait_for_examine" 00:21:38.411 } 00:21:38.411 ] 00:21:38.411 } 00:21:38.411 ] 00:21:38.411 } 00:21:38.669 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:38.669 fio-3.35 00:21:38.669 Starting 1 thread 00:21:45.235 00:21:45.235 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71531: Mon Dec 9 23:03:11 2024 00:21:45.235 read: IOPS=42.8k, BW=167MiB/s (175MB/s)(837MiB/5001msec) 00:21:45.235 slat (usec): min=4, max=3406, avg=20.41, stdev=28.46 00:21:45.235 clat (usec): min=92, max=5416, avg=881.23, stdev=583.35 00:21:45.235 lat (usec): min=142, max=5514, avg=901.64, stdev=588.92 00:21:45.235 clat percentiles (usec): 00:21:45.235 | 1.00th=[ 188], 5.00th=[ 269], 10.00th=[ 338], 20.00th=[ 461], 00:21:45.235 | 30.00th=[ 570], 40.00th=[ 676], 50.00th=[ 783], 60.00th=[ 889], 00:21:45.235 | 70.00th=[ 1004], 80.00th=[ 1139], 90.00th=[ 1385], 95.00th=[ 1876], 00:21:45.235 | 99.00th=[ 3490], 99.50th=[ 3982], 99.90th=[ 4621], 99.95th=[ 4817], 00:21:45.235 | 99.99th=[ 5145] 00:21:45.235 bw ( KiB/s): min=145160, max=179672, per=99.23%, avg=170051.56, 
stdev=10722.55, samples=9 00:21:45.235 iops : min=36290, max=44918, avg=42512.78, stdev=2680.57, samples=9 00:21:45.235 lat (usec) : 100=0.03%, 250=3.99%, 500=19.45%, 750=23.32%, 1000=22.91% 00:21:45.235 lat (msec) : 2=25.90%, 4=3.91%, 10=0.50% 00:21:45.235 cpu : usr=26.42%, sys=53.70%, ctx=150, majf=0, minf=764 00:21:45.235 IO depths : 1=0.1%, 2=0.9%, 4=3.9%, 8=10.8%, 16=26.0%, 32=56.5%, >=64=1.8% 00:21:45.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:45.235 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:21:45.235 issued rwts: total=214264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:45.235 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:45.235 00:21:45.235 Run status group 0 (all jobs): 00:21:45.235 READ: bw=167MiB/s (175MB/s), 167MiB/s-167MiB/s (175MB/s-175MB/s), io=837MiB (878MB), run=5001-5001msec 00:21:45.909 ----------------------------------------------------- 00:21:45.909 Suppressions used: 00:21:45.909 count bytes template 00:21:45.909 1 11 /usr/src/fio/parse.c 00:21:45.909 1 8 libtcmalloc_minimal.so 00:21:45.909 1 904 libcrypto.so 00:21:45.909 ----------------------------------------------------- 00:21:45.909 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:45.909 23:03:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:21:45.909 { 00:21:45.909 "subsystems": [ 00:21:45.909 { 00:21:45.909 "subsystem": "bdev", 00:21:45.909 "config": [ 00:21:45.909 { 00:21:45.909 "params": { 00:21:45.909 "io_mechanism": "libaio", 00:21:45.909 "conserve_cpu": true, 00:21:45.909 "filename": "/dev/nvme0n1", 00:21:45.909 "name": "xnvme_bdev" 00:21:45.909 }, 00:21:45.909 "method": "bdev_xnvme_create" 00:21:45.909 }, 00:21:45.909 { 00:21:45.909 "method": "bdev_wait_for_examine" 00:21:45.909 } 00:21:45.909 ] 00:21:45.909 } 00:21:45.909 ] 00:21:45.909 } 00:21:46.169 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:21:46.169 fio-3.35 00:21:46.169 Starting 1 thread 00:21:52.751 00:21:52.751 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71629: Mon Dec 9 23:03:19 2024 00:21:52.751 write: IOPS=44.3k, BW=173MiB/s (182MB/s)(866MiB/5001msec); 0 zone resets 00:21:52.751 slat (usec): min=4, max=941, avg=19.39, stdev=27.77 00:21:52.751 clat (usec): min=66, max=7463, avg=872.33, stdev=590.41 00:21:52.751 lat (usec): min=71, max=7646, avg=891.71, stdev=596.60 00:21:52.751 clat percentiles (usec): 00:21:52.751 | 1.00th=[ 190], 5.00th=[ 277], 10.00th=[ 351], 20.00th=[ 478], 00:21:52.751 | 30.00th=[ 578], 40.00th=[ 676], 50.00th=[ 775], 60.00th=[ 865], 00:21:52.751 | 70.00th=[ 963], 80.00th=[ 1090], 90.00th=[ 1352], 95.00th=[ 1893], 00:21:52.751 | 99.00th=[ 3523], 99.50th=[ 4047], 99.90th=[ 4817], 99.95th=[ 5080], 00:21:52.751 | 99.99th=[ 5866] 00:21:52.751 bw ( KiB/s): min=144072, max=221648, per=100.00%, avg=180552.00, stdev=23253.52, samples=9 00:21:52.751 iops : min=36018, max=55412, avg=45138.00, stdev=5813.38, samples=9 00:21:52.751 lat (usec) : 100=0.03%, 250=3.53%, 500=18.74%, 750=25.44%, 1000=25.85% 00:21:52.751 lat (msec) : 2=21.88%, 4=3.98%, 10=0.55% 00:21:52.751 cpu : usr=28.68%, sys=52.02%, ctx=143, majf=0, minf=765 00:21:52.751 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=10.4%, 16=25.5%, 32=57.4%, >=64=1.9% 00:21:52.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.751 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:21:52.751 issued rwts: total=0,221770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.751 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:52.751 00:21:52.751 Run status group 0 (all jobs): 00:21:52.751 WRITE: bw=173MiB/s (182MB/s), 173MiB/s-173MiB/s (182MB/s-182MB/s), io=866MiB (908MB), run=5001-5001msec 00:21:53.419 ----------------------------------------------------- 00:21:53.419 Suppressions used: 00:21:53.419 count bytes template 00:21:53.419 1 11 /usr/src/fio/parse.c 00:21:53.419 1 8 libtcmalloc_minimal.so 00:21:53.419 1 904 libcrypto.so 00:21:53.419 ----------------------------------------------------- 00:21:53.419 00:21:53.419 00:21:53.419 real 0m15.140s 00:21:53.419 user 0m6.693s 00:21:53.419 sys 0m6.164s 00:21:53.420 23:03:20 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.420 ************************************ 00:21:53.420 END TEST xnvme_fio_plugin 00:21:53.420 ************************************ 00:21:53.420 23:03:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:21:53.679 23:03:20 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:21:53.679 23:03:20 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:21:53.679 23:03:20 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:21:53.679 23:03:20 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:21:53.679 23:03:20 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:21:53.679 23:03:20 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:21:53.679 23:03:20 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:21:53.679 23:03:20 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:21:53.679 23:03:20 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:21:53.679 23:03:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:53.679 23:03:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.679 23:03:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:53.679 ************************************ 00:21:53.679 START TEST xnvme_rpc 00:21:53.679 ************************************ 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71721 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71721 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71721 ']' 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.679 23:03:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:53.679 [2024-12-09 23:03:20.891521] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
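END TEST xnvme_fio_plugin above closes out the libaio passes, and the loop at xnvme/xnvme.sh@75 has just advanced to io_uring, reusing the filename map initialised near the top of this section. A sketch of that outer iteration, assuming the xnvme_io array holds the three mechanisms the map covers, in this order:

    declare -A xnvme_filename=(
        [libaio]=/dev/nvme0n1
        [io_uring]=/dev/nvme0n1
        [io_uring_cmd]=/dev/ng0n1   # uring command passthrough uses the generic char node
    )
    declare -A method_bdev_xnvme_create_0
    for io in libaio io_uring io_uring_cmd; do
        method_bdev_xnvme_create_0[io_mechanism]=$io
        method_bdev_xnvme_create_0[filename]=${xnvme_filename[$io]}
        # then run_test xnvme_rpc / xnvme_bdevperf / xnvme_fio_plugin,
        # once with conserve_cpu=false and once with conserve_cpu=true
    done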
00:21:53.679 [2024-12-09 23:03:20.891929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71721 ] 00:21:53.938 [2024-12-09 23:03:21.075314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.938 [2024-12-09 23:03:21.205777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:54.873 xnvme_bdev 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.873 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.131 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71721 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71721 ']' 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71721 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71721 00:21:55.132 killing process with pid 71721 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71721' 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71721 00:21:55.132 23:03:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71721 00:21:57.731 00:21:57.731 real 0m4.131s 00:21:57.731 user 0m4.139s 00:21:57.731 sys 0m0.647s 00:21:57.731 23:03:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.731 23:03:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:57.731 ************************************ 00:21:57.731 END TEST xnvme_rpc 00:21:57.731 ************************************ 00:21:57.731 23:03:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:21:57.731 23:03:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:57.731 23:03:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.731 23:03:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:57.731 ************************************ 00:21:57.731 START TEST xnvme_bdevperf 00:21:57.731 ************************************ 00:21:57.731 23:03:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:21:57.731 23:03:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:21:57.731 23:03:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:21:57.731 23:03:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:21:57.731 23:03:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:21:57.731 23:03:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:21:57.731 23:03:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:21:57.731 23:03:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:57.731 { 00:21:57.731 "subsystems": [ 00:21:57.731 { 00:21:57.731 "subsystem": "bdev", 00:21:57.731 "config": [ 00:21:57.731 { 00:21:57.731 "params": { 00:21:57.731 "io_mechanism": "io_uring", 00:21:57.731 "conserve_cpu": false, 00:21:57.731 "filename": "/dev/nvme0n1", 00:21:57.731 "name": "xnvme_bdev" 00:21:57.731 }, 00:21:57.731 "method": "bdev_xnvme_create" 00:21:57.731 }, 00:21:57.731 { 00:21:57.731 "method": "bdev_wait_for_examine" 00:21:57.731 } 00:21:57.731 ] 00:21:57.731 } 00:21:57.731 ] 00:21:57.731 } 00:21:57.988 [2024-12-09 23:03:25.074308] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:21:57.988 [2024-12-09 23:03:25.074446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71802 ] 00:21:57.988 [2024-12-09 23:03:25.258192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.246 [2024-12-09 23:03:25.378852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.504 Running I/O for 5 seconds... 00:22:00.819 43463.00 IOPS, 169.78 MiB/s [2024-12-09T23:03:29.090Z] 41164.00 IOPS, 160.80 MiB/s [2024-12-09T23:03:30.027Z] 39884.00 IOPS, 155.80 MiB/s [2024-12-09T23:03:30.964Z] 36761.00 IOPS, 143.60 MiB/s [2024-12-09T23:03:30.964Z] 35335.20 IOPS, 138.03 MiB/s 00:22:03.628 Latency(us) 00:22:03.628 [2024-12-09T23:03:30.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.628 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:22:03.628 xnvme_bdev : 5.01 35312.36 137.94 0.00 0.00 1807.44 345.45 10475.23 00:22:03.628 [2024-12-09T23:03:30.964Z] =================================================================================================================== 00:22:03.628 [2024-12-09T23:03:30.964Z] Total : 35312.36 137.94 0.00 0.00 1807.44 345.45 10475.23 00:22:04.705 23:03:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:04.705 23:03:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:22:04.705 23:03:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:04.705 23:03:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:04.705 23:03:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:04.705 { 00:22:04.705 "subsystems": [ 00:22:04.705 { 00:22:04.705 "subsystem": "bdev", 00:22:04.705 "config": [ 00:22:04.705 { 00:22:04.705 "params": { 00:22:04.705 "io_mechanism": "io_uring", 00:22:04.705 "conserve_cpu": false, 00:22:04.705 "filename": "/dev/nvme0n1", 00:22:04.705 "name": "xnvme_bdev" 00:22:04.705 }, 00:22:04.705 "method": "bdev_xnvme_create" 00:22:04.705 }, 00:22:04.705 { 00:22:04.705 "method": "bdev_wait_for_examine" 00:22:04.705 } 00:22:04.705 ] 00:22:04.705 } 00:22:04.705 ] 00:22:04.705 } 00:22:04.963 [2024-12-09 23:03:32.083555] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
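The gen_conf output captured above is the whole contract between the test and bdevperf: a JSON bdev table delivered on file descriptor 62. Below is a minimal stand-alone sketch of the same randwrite invocation, assuming only plain bash — the binary path, device node, and flags are copied from the log, while the heredoc-on-fd-62 is a stand-in for the harness's gen_conf process substitution, not part of the harness itself.

# Hand-rolled equivalent of gen_conf piped to bdevperf via /dev/fd/62.
# Paths and flags come from the log above; running this outside the CI VM
# is an untested assumption.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

"$bdevperf" --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 62<<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON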
00:22:04.963 [2024-12-09 23:03:32.083695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71888 ] 00:22:04.963 [2024-12-09 23:03:32.264430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.222 [2024-12-09 23:03:32.391551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.481 Running I/O for 5 seconds... 00:22:07.799 29833.00 IOPS, 116.54 MiB/s [2024-12-09T23:03:36.105Z] 29875.50 IOPS, 116.70 MiB/s [2024-12-09T23:03:37.042Z] 28682.33 IOPS, 112.04 MiB/s [2024-12-09T23:03:38.005Z] 28150.25 IOPS, 109.96 MiB/s 00:22:10.669 Latency(us) 00:22:10.669 [2024-12-09T23:03:38.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.669 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:22:10.669 xnvme_bdev : 5.00 28213.22 110.21 0.00 0.00 2262.64 114.33 63588.34 00:22:10.669 [2024-12-09T23:03:38.005Z] =================================================================================================================== 00:22:10.669 [2024-12-09T23:03:38.005Z] Total : 28213.22 110.21 0.00 0.00 2262.64 114.33 63588.34 00:22:12.058 00:22:12.058 real 0m14.031s 00:22:12.058 user 0m6.428s 00:22:12.058 sys 0m7.347s 00:22:12.058 23:03:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.058 ************************************ 00:22:12.058 END TEST xnvme_bdevperf 00:22:12.058 ************************************ 00:22:12.058 23:03:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:12.058 23:03:39 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:22:12.058 23:03:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:12.058 23:03:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.058 23:03:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:12.058 ************************************ 00:22:12.058 START TEST xnvme_fio_plugin 00:22:12.058 ************************************ 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:12.058 { 00:22:12.058 "subsystems": [ 00:22:12.058 { 00:22:12.058 "subsystem": "bdev", 00:22:12.058 "config": [ 00:22:12.058 { 00:22:12.058 "params": { 00:22:12.058 "io_mechanism": "io_uring", 00:22:12.058 "conserve_cpu": false, 00:22:12.058 "filename": "/dev/nvme0n1", 00:22:12.058 "name": "xnvme_bdev" 00:22:12.058 }, 00:22:12.058 "method": "bdev_xnvme_create" 00:22:12.058 }, 00:22:12.058 { 00:22:12.058 "method": "bdev_wait_for_examine" 00:22:12.058 } 00:22:12.058 ] 00:22:12.058 } 00:22:12.058 ] 00:22:12.058 } 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:12.058 23:03:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:12.058 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:12.058 fio-3.35 00:22:12.058 Starting 1 thread 00:22:18.681 00:22:18.681 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72007: Mon Dec 9 23:03:45 2024 00:22:18.681 read: IOPS=32.6k, BW=127MiB/s (134MB/s)(637MiB/5002msec) 00:22:18.681 slat (nsec): min=2352, max=81423, avg=5196.71, stdev=2292.37 00:22:18.681 clat (usec): min=811, max=7680, avg=1757.20, stdev=398.69 00:22:18.681 lat (usec): min=814, max=7688, avg=1762.40, stdev=399.88 00:22:18.681 clat percentiles (usec): 00:22:18.681 | 1.00th=[ 971], 5.00th=[ 1057], 10.00th=[ 1156], 20.00th=[ 1450], 00:22:18.681 | 30.00th=[ 1631], 40.00th=[ 1713], 50.00th=[ 1795], 60.00th=[ 1860], 00:22:18.681 | 70.00th=[ 1942], 80.00th=[ 2024], 90.00th=[ 2180], 95.00th=[ 2343], 00:22:18.681 | 99.00th=[ 2704], 99.50th=[ 2900], 99.90th=[ 3392], 99.95th=[ 3556], 00:22:18.681 | 99.99th=[ 7570] 00:22:18.681 bw ( KiB/s): min=102400, max=190464, per=100.00%, avg=131356.44, 
stdev=25272.93, samples=9 00:22:18.681 iops : min=25600, max=47616, avg=32839.11, stdev=6318.23, samples=9 00:22:18.681 lat (usec) : 1000=2.06% 00:22:18.681 lat (msec) : 2=75.10%, 4=22.80%, 10=0.04% 00:22:18.681 cpu : usr=30.25%, sys=68.51%, ctx=13, majf=0, minf=762 00:22:18.681 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:18.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:18.681 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:22:18.681 issued rwts: total=163072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:18.681 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:18.681 00:22:18.681 Run status group 0 (all jobs): 00:22:18.681 READ: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=637MiB (668MB), run=5002-5002msec 00:22:19.249 ----------------------------------------------------- 00:22:19.250 Suppressions used: 00:22:19.250 count bytes template 00:22:19.250 1 11 /usr/src/fio/parse.c 00:22:19.250 1 8 libtcmalloc_minimal.so 00:22:19.250 1 904 libcrypto.so 00:22:19.250 ----------------------------------------------------- 00:22:19.250 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:19.508 23:03:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:19.508 { 00:22:19.508 "subsystems": [ 00:22:19.508 { 00:22:19.508 "subsystem": "bdev", 00:22:19.508 "config": [ 00:22:19.508 { 00:22:19.508 "params": { 00:22:19.508 "io_mechanism": "io_uring", 00:22:19.508 "conserve_cpu": false, 00:22:19.508 "filename": "/dev/nvme0n1", 00:22:19.508 "name": "xnvme_bdev" 00:22:19.508 }, 00:22:19.508 "method": "bdev_xnvme_create" 00:22:19.508 }, 00:22:19.508 { 00:22:19.508 "method": "bdev_wait_for_examine" 00:22:19.508 } 00:22:19.508 ] 00:22:19.508 } 00:22:19.508 ] 00:22:19.508 } 00:22:19.767 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:19.767 fio-3.35 00:22:19.767 Starting 1 thread 00:22:26.333 00:22:26.333 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72105: Mon Dec 9 23:03:52 2024 00:22:26.333 write: IOPS=29.0k, BW=113MiB/s (119MB/s)(567MiB/5002msec); 0 zone resets 00:22:26.333 slat (usec): min=4, max=198, avg= 6.13, stdev= 2.45 00:22:26.333 clat (usec): min=1296, max=8215, avg=1963.77, stdev=332.58 00:22:26.333 lat (usec): min=1301, max=8225, avg=1969.89, stdev=333.43 00:22:26.333 clat percentiles (usec): 00:22:26.333 | 1.00th=[ 1483], 5.00th=[ 1565], 10.00th=[ 1631], 20.00th=[ 1696], 00:22:26.333 | 30.00th=[ 1778], 40.00th=[ 1844], 50.00th=[ 1909], 60.00th=[ 1975], 00:22:26.333 | 70.00th=[ 2073], 80.00th=[ 2212], 90.00th=[ 2376], 95.00th=[ 2540], 00:22:26.333 | 99.00th=[ 2933], 99.50th=[ 3130], 99.90th=[ 3523], 99.95th=[ 3916], 00:22:26.333 | 99.99th=[ 8094] 00:22:26.333 bw ( KiB/s): min=101376, max=135168, per=100.00%, avg=116906.67, stdev=10179.02, samples=9 00:22:26.333 iops : min=25344, max=33792, avg=29226.67, stdev=2544.75, samples=9 00:22:26.333 lat (msec) : 2=62.21%, 4=37.75%, 10=0.04% 00:22:26.333 cpu : usr=31.83%, sys=66.91%, ctx=9, majf=0, minf=763 00:22:26.333 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:26.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.333 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:22:26.333 issued rwts: total=0,145024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.333 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:26.333 00:22:26.334 Run status group 0 (all jobs): 00:22:26.334 WRITE: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=567MiB (594MB), run=5002-5002msec 00:22:26.901 ----------------------------------------------------- 00:22:26.901 Suppressions used: 00:22:26.901 count bytes template 00:22:26.901 1 11 /usr/src/fio/parse.c 00:22:26.901 1 8 libtcmalloc_minimal.so 00:22:26.901 1 904 libcrypto.so 00:22:26.901 ----------------------------------------------------- 00:22:26.901 00:22:26.901 ************************************ 00:22:26.901 END TEST xnvme_fio_plugin 00:22:26.901 ************************************ 00:22:26.901 00:22:26.901 real 0m15.002s 00:22:26.901 user 0m7.052s 00:22:26.901 sys 0m7.570s 00:22:26.901 23:03:54 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.901 23:03:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:26.901 23:03:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:22:26.901 23:03:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:22:26.902 23:03:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:22:26.902 23:03:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:22:26.902 23:03:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:26.902 23:03:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.902 23:03:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:26.902 ************************************ 00:22:26.902 START TEST xnvme_rpc 00:22:26.902 ************************************ 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72198 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72198 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72198 ']' 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.902 23:03:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:27.160 [2024-12-09 23:03:54.275180] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
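The xnvme_rpc pass that begins here repeats a fixed RPC conversation each time: create the bdev, read the saved config back, compare each parameter, delete the bdev. Driven by hand against the spdk_tgt started above, it reduces to the sketch below — the rpc.py path is an assumption based on the standard SPDK repo layout (the harness goes through its rpc_cmd wrapper instead), while the -c flag mirrors the harness's cc["true"]=-c and the jq filter is copied verbatim from xnvme/common.sh as echoed in the log.

# Assumes spdk_tgt is already listening on the default /var/tmp/spdk.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# -c requests conserve_cpu=true, mirroring the bdev_xnvme_create call below.
"$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c

# rpc_xnvme(): pull one creation parameter back out of the live config.
"$rpc" framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'

"$rpc" bdev_xnvme_delete xnvme_bdev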
00:22:27.160 [2024-12-09 23:03:54.275577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72198 ] 00:22:27.160 [2024-12-09 23:03:54.460032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.419 [2024-12-09 23:03:54.578924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:28.352 xnvme_bdev 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:22:28.352 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:22:28.353 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:22:28.353 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.353 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:28.611 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.611 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:22:28.611 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:22:28.611 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.611 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72198 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72198 ']' 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72198 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72198 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72198' 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72198 00:22:28.612 23:03:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72198 00:22:28.612 killing process with pid 72198 00:22:31.163 00:22:31.163 real 0m4.119s 00:22:31.163 user 0m4.105s 00:22:31.163 sys 0m0.596s 00:22:31.163 23:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.163 23:03:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:31.163 ************************************ 00:22:31.163 END TEST xnvme_rpc 00:22:31.163 ************************************ 00:22:31.163 23:03:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:22:31.163 23:03:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:31.163 23:03:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:31.163 23:03:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:31.163 ************************************ 00:22:31.163 START TEST xnvme_bdevperf 00:22:31.163 ************************************ 00:22:31.163 23:03:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:22:31.163 23:03:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:22:31.163 23:03:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:22:31.163 23:03:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:31.163 23:03:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:31.163 23:03:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 
00:22:31.163 23:03:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:31.163 23:03:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:31.163 { 00:22:31.163 "subsystems": [ 00:22:31.163 { 00:22:31.163 "subsystem": "bdev", 00:22:31.163 "config": [ 00:22:31.163 { 00:22:31.163 "params": { 00:22:31.163 "io_mechanism": "io_uring", 00:22:31.163 "conserve_cpu": true, 00:22:31.163 "filename": "/dev/nvme0n1", 00:22:31.163 "name": "xnvme_bdev" 00:22:31.163 }, 00:22:31.163 "method": "bdev_xnvme_create" 00:22:31.163 }, 00:22:31.163 { 00:22:31.163 "method": "bdev_wait_for_examine" 00:22:31.163 } 00:22:31.163 ] 00:22:31.163 } 00:22:31.163 ] 00:22:31.163 } 00:22:31.163 [2024-12-09 23:03:58.444627] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:22:31.163 [2024-12-09 23:03:58.444787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72283 ] 00:22:31.420 [2024-12-09 23:03:58.628506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.678 [2024-12-09 23:03:58.758424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.936 Running I/O for 5 seconds... 00:22:34.252 36544.00 IOPS, 142.75 MiB/s [2024-12-09T23:04:02.521Z] 39520.00 IOPS, 154.38 MiB/s [2024-12-09T23:04:03.458Z] 40980.67 IOPS, 160.08 MiB/s [2024-12-09T23:04:04.513Z] 42414.75 IOPS, 165.68 MiB/s [2024-12-09T23:04:04.513Z] 43250.20 IOPS, 168.95 MiB/s 00:22:37.177 Latency(us) 00:22:37.177 [2024-12-09T23:04:04.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.177 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:22:37.177 xnvme_bdev : 5.00 43241.77 168.91 0.00 0.00 1476.10 779.72 6316.72 00:22:37.177 [2024-12-09T23:04:04.513Z] =================================================================================================================== 00:22:37.177 [2024-12-09T23:04:04.513Z] Total : 43241.77 168.91 0.00 0.00 1476.10 779.72 6316.72 00:22:38.114 23:04:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:38.114 23:04:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:22:38.114 23:04:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:22:38.114 23:04:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:38.114 23:04:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:38.114 { 00:22:38.114 "subsystems": [ 00:22:38.114 { 00:22:38.114 "subsystem": "bdev", 00:22:38.114 "config": [ 00:22:38.114 { 00:22:38.114 "params": { 00:22:38.114 "io_mechanism": "io_uring", 00:22:38.114 "conserve_cpu": true, 00:22:38.114 "filename": "/dev/nvme0n1", 00:22:38.114 "name": "xnvme_bdev" 00:22:38.114 }, 00:22:38.114 "method": "bdev_xnvme_create" 00:22:38.114 }, 00:22:38.114 { 00:22:38.114 "method": "bdev_wait_for_examine" 00:22:38.114 } 00:22:38.114 ] 00:22:38.114 } 00:22:38.114 ] 00:22:38.114 } 00:22:38.373 [2024-12-09 23:04:05.454396] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:22:38.373 [2024-12-09 23:04:05.454568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72365 ] 00:22:38.373 [2024-12-09 23:04:05.638864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.632 [2024-12-09 23:04:05.764467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.890 Running I/O for 5 seconds... 00:22:41.199 33152.00 IOPS, 129.50 MiB/s [2024-12-09T23:04:09.495Z] 33440.00 IOPS, 130.62 MiB/s [2024-12-09T23:04:10.434Z] 33408.00 IOPS, 130.50 MiB/s [2024-12-09T23:04:11.370Z] 33800.00 IOPS, 132.03 MiB/s [2024-12-09T23:04:11.370Z] 33184.00 IOPS, 129.62 MiB/s 00:22:44.034 Latency(us) 00:22:44.034 [2024-12-09T23:04:11.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.034 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:22:44.034 xnvme_bdev : 5.00 33169.55 129.57 0.00 0.00 1924.06 1006.73 5106.02 00:22:44.034 [2024-12-09T23:04:11.370Z] =================================================================================================================== 00:22:44.034 [2024-12-09T23:04:11.370Z] Total : 33169.55 129.57 0.00 0.00 1924.06 1006.73 5106.02 00:22:45.414 00:22:45.414 real 0m13.996s 00:22:45.414 user 0m8.068s 00:22:45.414 sys 0m5.446s 00:22:45.414 23:04:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.414 ************************************ 00:22:45.414 END TEST xnvme_bdevperf 00:22:45.414 ************************************ 00:22:45.414 23:04:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:45.414 23:04:12 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:22:45.414 23:04:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:45.414 23:04:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.414 23:04:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:45.414 ************************************ 00:22:45.414 START TEST xnvme_fio_plugin 00:22:45.414 ************************************ 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:45.414 23:04:12 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:45.414 23:04:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:45.414 { 00:22:45.414 "subsystems": [ 00:22:45.414 { 00:22:45.414 "subsystem": "bdev", 00:22:45.414 "config": [ 00:22:45.414 { 00:22:45.414 "params": { 00:22:45.414 "io_mechanism": "io_uring", 00:22:45.414 "conserve_cpu": true, 00:22:45.414 "filename": "/dev/nvme0n1", 00:22:45.414 "name": "xnvme_bdev" 00:22:45.414 }, 00:22:45.414 "method": "bdev_xnvme_create" 00:22:45.414 }, 00:22:45.414 { 00:22:45.414 "method": "bdev_wait_for_examine" 00:22:45.414 } 00:22:45.414 ] 00:22:45.414 } 00:22:45.414 ] 00:22:45.414 } 00:22:45.414 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:45.414 fio-3.35 00:22:45.414 Starting 1 thread 00:22:51.989 00:22:51.989 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72491: Mon Dec 9 23:04:18 2024 00:22:51.989 read: IOPS=32.7k, BW=128MiB/s (134MB/s)(639MiB/5001msec) 00:22:51.989 slat (usec): min=2, max=375, avg= 5.14, stdev= 3.51 00:22:51.989 clat (usec): min=435, max=3621, avg=1750.86, stdev=300.30 00:22:51.989 lat (usec): min=439, max=3630, avg=1756.00, stdev=301.14 00:22:51.989 clat percentiles (usec): 00:22:51.989 | 1.00th=[ 1287], 5.00th=[ 1385], 10.00th=[ 1434], 20.00th=[ 1516], 00:22:51.989 | 30.00th=[ 1565], 40.00th=[ 1631], 50.00th=[ 1680], 60.00th=[ 1745], 00:22:51.989 | 70.00th=[ 1844], 80.00th=[ 1958], 90.00th=[ 2180], 95.00th=[ 2343], 00:22:51.989 | 99.00th=[ 2704], 99.50th=[ 2868], 99.90th=[ 3261], 99.95th=[ 3392], 00:22:51.989 | 99.99th=[ 3556] 00:22:51.989 bw ( 
KiB/s): min=111872, max=149248, per=100.00%, avg=131612.44, stdev=11662.37, samples=9 00:22:51.989 iops : min=27968, max=37312, avg=32903.11, stdev=2915.59, samples=9 00:22:51.989 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04% 00:22:51.989 lat (msec) : 2=82.12%, 4=17.83% 00:22:51.989 cpu : usr=47.80%, sys=47.94%, ctx=97, majf=0, minf=762 00:22:51.989 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:51.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.989 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:22:51.989 issued rwts: total=163695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.989 00:22:51.989 Run status group 0 (all jobs): 00:22:51.989 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=639MiB (670MB), run=5001-5001msec 00:22:52.934 ----------------------------------------------------- 00:22:52.934 Suppressions used: 00:22:52.934 count bytes template 00:22:52.934 1 11 /usr/src/fio/parse.c 00:22:52.934 1 8 libtcmalloc_minimal.so 00:22:52.934 1 904 libcrypto.so 00:22:52.934 ----------------------------------------------------- 00:22:52.934 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:52.934 23:04:20 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:52.934 23:04:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:22:52.934 { 00:22:52.934 "subsystems": [ 00:22:52.934 { 00:22:52.934 "subsystem": "bdev", 00:22:52.934 "config": [ 00:22:52.934 { 00:22:52.934 "params": { 00:22:52.934 "io_mechanism": "io_uring", 00:22:52.934 "conserve_cpu": true, 00:22:52.934 "filename": "/dev/nvme0n1", 00:22:52.934 "name": "xnvme_bdev" 00:22:52.934 }, 00:22:52.934 "method": "bdev_xnvme_create" 00:22:52.934 }, 00:22:52.934 { 00:22:52.934 "method": "bdev_wait_for_examine" 00:22:52.934 } 00:22:52.934 ] 00:22:52.934 } 00:22:52.934 ] 00:22:52.934 } 00:22:53.194 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:22:53.194 fio-3.35 00:22:53.194 Starting 1 thread 00:22:59.757 00:22:59.757 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72590: Mon Dec 9 23:04:26 2024 00:22:59.757 write: IOPS=32.7k, BW=128MiB/s (134MB/s)(638MiB/5001msec); 0 zone resets 00:22:59.757 slat (usec): min=3, max=148, avg= 5.28, stdev= 2.15 00:22:59.757 clat (usec): min=1206, max=7226, avg=1750.45, stdev=271.41 00:22:59.757 lat (usec): min=1210, max=7230, avg=1755.73, stdev=272.35 00:22:59.757 clat percentiles (usec): 00:22:59.757 | 1.00th=[ 1352], 5.00th=[ 1418], 10.00th=[ 1467], 20.00th=[ 1532], 00:22:59.757 | 30.00th=[ 1582], 40.00th=[ 1631], 50.00th=[ 1680], 60.00th=[ 1745], 00:22:59.757 | 70.00th=[ 1827], 80.00th=[ 1942], 90.00th=[ 2147], 95.00th=[ 2311], 00:22:59.757 | 99.00th=[ 2606], 99.50th=[ 2704], 99.90th=[ 2868], 99.95th=[ 2966], 00:22:59.757 | 99.99th=[ 3261] 00:22:59.757 bw ( KiB/s): min=116736, max=147968, per=100.00%, avg=131240.00, stdev=10074.82, samples=9 00:22:59.757 iops : min=29184, max=36992, avg=32810.22, stdev=2519.03, samples=9 00:22:59.757 lat (msec) : 2=83.84%, 4=16.16%, 10=0.01% 00:22:59.757 cpu : usr=49.76%, sys=46.86%, ctx=15, majf=0, minf=763 00:22:59.757 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:22:59.757 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.757 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:22:59.757 issued rwts: total=0,163390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.757 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.757 00:22:59.757 Run status group 0 (all jobs): 00:22:59.757 WRITE: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=638MiB (669MB), run=5001-5001msec 00:23:00.323 ----------------------------------------------------- 00:23:00.323 Suppressions used: 00:23:00.323 count bytes template 00:23:00.323 1 11 /usr/src/fio/parse.c 00:23:00.323 1 8 libtcmalloc_minimal.so 00:23:00.323 1 904 libcrypto.so 00:23:00.323 ----------------------------------------------------- 00:23:00.323 00:23:00.581 00:23:00.581 real 0m15.274s 00:23:00.581 user 0m8.993s 00:23:00.581 sys 0m5.637s 00:23:00.581 23:04:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:23:00.581 ************************************ 00:23:00.581 END TEST xnvme_fio_plugin 00:23:00.581 ************************************ 00:23:00.581 23:04:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:23:00.581 23:04:27 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:23:00.581 23:04:27 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:23:00.581 23:04:27 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:23:00.581 23:04:27 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:23:00.581 23:04:27 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:23:00.581 23:04:27 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:23:00.581 23:04:27 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:23:00.581 23:04:27 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:23:00.581 23:04:27 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:23:00.581 23:04:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:00.581 23:04:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:00.581 23:04:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:00.581 ************************************ 00:23:00.581 START TEST xnvme_rpc 00:23:00.581 ************************************ 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72681 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72681 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72681 ']' 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.581 23:04:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:00.581 [2024-12-09 23:04:27.863333] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
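From this point the suite swaps the io_mechanism to io_uring_cmd, which talks to the NVMe generic character device /dev/ng0n1 via io_uring passthrough rather than to the block device. Only the mechanism and filename change; the RPC conversation is identical. A sketch under the same assumptions as the earlier rpc.py example:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# io_uring_cmd + /dev/ng0n1: uring passthrough on the generic char device.
# conserve_cpu is left at its default, matching the log's empty '' argument.
"$rpc" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
"$rpc" framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
"$rpc" bdev_xnvme_delete xnvme_bdev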
00:23:00.581 [2024-12-09 23:04:27.863535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72681 ] 00:23:00.842 [2024-12-09 23:04:28.040262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.106 [2024-12-09 23:04:28.215069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:02.054 xnvme_bdev 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.054 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72681 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72681 ']' 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72681 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72681 00:23:02.314 killing process with pid 72681 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72681' 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72681 00:23:02.314 23:04:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72681 00:23:04.855 00:23:04.855 real 0m4.280s 00:23:04.855 user 0m4.358s 00:23:04.855 sys 0m0.666s 00:23:04.855 23:04:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.855 23:04:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:04.855 ************************************ 00:23:04.855 END TEST xnvme_rpc 00:23:04.855 ************************************ 00:23:04.855 23:04:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:23:04.855 23:04:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:04.855 23:04:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.855 23:04:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:04.855 ************************************ 00:23:04.855 START TEST xnvme_bdevperf 00:23:04.855 ************************************ 00:23:04.855 23:04:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:23:04.855 23:04:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:23:04.855 23:04:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:23:04.855 23:04:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:04.855 23:04:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:23:04.855 23:04:32 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:23:04.855 23:04:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:23:04.855 23:04:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:04.855 { 00:23:04.855 "subsystems": [ 00:23:04.855 { 00:23:04.855 "subsystem": "bdev", 00:23:04.855 "config": [ 00:23:04.855 { 00:23:04.855 "params": { 00:23:04.855 "io_mechanism": "io_uring_cmd", 00:23:04.855 "conserve_cpu": false, 00:23:04.855 "filename": "/dev/ng0n1", 00:23:04.855 "name": "xnvme_bdev" 00:23:04.855 }, 00:23:04.855 "method": "bdev_xnvme_create" 00:23:04.855 }, 00:23:04.855 { 00:23:04.855 "method": "bdev_wait_for_examine" 00:23:04.855 } 00:23:04.855 ] 00:23:04.855 } 00:23:04.855 ] 00:23:04.855 } 00:23:05.119 [2024-12-09 23:04:32.198020] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:23:05.119 [2024-12-09 23:04:32.198421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72762 ] 00:23:05.119 [2024-12-09 23:04:32.383879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.378 [2024-12-09 23:04:32.504660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.636 Running I/O for 5 seconds... 00:23:07.945 51263.00 IOPS, 200.25 MiB/s [2024-12-09T23:04:36.215Z] 48003.50 IOPS, 187.51 MiB/s [2024-12-09T23:04:37.148Z] 44794.00 IOPS, 174.98 MiB/s [2024-12-09T23:04:38.085Z] 41803.50 IOPS, 163.29 MiB/s 00:23:10.749 Latency(us) 00:23:10.749 [2024-12-09T23:04:38.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.749 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:23:10.749 xnvme_bdev : 5.00 39920.31 155.94 0.00 0.00 1598.57 314.19 6658.88 00:23:10.749 [2024-12-09T23:04:38.085Z] =================================================================================================================== 00:23:10.749 [2024-12-09T23:04:38.085Z] Total : 39920.31 155.94 0.00 0.00 1598.57 314.19 6658.88 00:23:12.124 23:04:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:12.124 23:04:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:23:12.124 23:04:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:23:12.124 23:04:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:23:12.125 23:04:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:12.125 { 00:23:12.125 "subsystems": [ 00:23:12.125 { 00:23:12.125 "subsystem": "bdev", 00:23:12.125 "config": [ 00:23:12.125 { 00:23:12.125 "params": { 00:23:12.125 "io_mechanism": "io_uring_cmd", 00:23:12.125 "conserve_cpu": false, 00:23:12.125 "filename": "/dev/ng0n1", 00:23:12.125 "name": "xnvme_bdev" 00:23:12.125 }, 00:23:12.125 "method": "bdev_xnvme_create" 00:23:12.125 }, 00:23:12.125 { 00:23:12.125 "method": "bdev_wait_for_examine" 00:23:12.125 } 00:23:12.125 ] 00:23:12.125 } 00:23:12.125 ] 00:23:12.125 } 00:23:12.125 [2024-12-09 23:04:39.164829] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:23:12.125 [2024-12-09 23:04:39.165211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72842 ] 00:23:12.125 [2024-12-09 23:04:39.346994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.384 [2024-12-09 23:04:39.475566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.644 Running I/O for 5 seconds... 00:23:14.960 27712.00 IOPS, 108.25 MiB/s [2024-12-09T23:04:43.228Z] 28160.00 IOPS, 110.00 MiB/s [2024-12-09T23:04:44.165Z] 28586.67 IOPS, 111.67 MiB/s [2024-12-09T23:04:45.100Z] 29968.00 IOPS, 117.06 MiB/s 00:23:17.764 Latency(us) 00:23:17.764 [2024-12-09T23:04:45.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.764 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:23:17.764 xnvme_bdev : 5.00 29783.32 116.34 0.00 0.00 2142.29 1158.07 7422.15 00:23:17.764 [2024-12-09T23:04:45.100Z] =================================================================================================================== 00:23:17.764 [2024-12-09T23:04:45.100Z] Total : 29783.32 116.34 0.00 0.00 2142.29 1158.07 7422.15 00:23:19.137 23:04:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:19.137 23:04:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:23:19.137 23:04:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:23:19.137 23:04:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:23:19.137 23:04:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:19.137 { 00:23:19.137 "subsystems": [ 00:23:19.137 { 00:23:19.137 "subsystem": "bdev", 00:23:19.137 "config": [ 00:23:19.137 { 00:23:19.137 "params": { 00:23:19.137 "io_mechanism": "io_uring_cmd", 00:23:19.137 "conserve_cpu": false, 00:23:19.137 "filename": "/dev/ng0n1", 00:23:19.137 "name": "xnvme_bdev" 00:23:19.137 }, 00:23:19.137 "method": "bdev_xnvme_create" 00:23:19.137 }, 00:23:19.137 { 00:23:19.137 "method": "bdev_wait_for_examine" 00:23:19.137 } 00:23:19.137 ] 00:23:19.137 } 00:23:19.137 ] 00:23:19.137 } 00:23:19.137 [2024-12-09 23:04:46.154377] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:23:19.137 [2024-12-09 23:04:46.154542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72920 ] 00:23:19.137 [2024-12-09 23:04:46.338500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.137 [2024-12-09 23:04:46.462236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.704 Running I/O for 5 seconds... 
00:23:21.598 70656.00 IOPS, 276.00 MiB/s [2024-12-09T23:04:49.869Z] 69696.00 IOPS, 272.25 MiB/s [2024-12-09T23:04:51.253Z] 69269.33 IOPS, 270.58 MiB/s [2024-12-09T23:04:52.210Z] 69584.00 IOPS, 271.81 MiB/s [2024-12-09T23:04:52.210Z] 69824.00 IOPS, 272.75 MiB/s 00:23:24.874 Latency(us) 00:23:24.874 [2024-12-09T23:04:52.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.874 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:23:24.874 xnvme_bdev : 5.00 69799.47 272.65 0.00 0.00 914.14 697.47 3895.31 00:23:24.874 [2024-12-09T23:04:52.210Z] =================================================================================================================== 00:23:24.874 [2024-12-09T23:04:52.210Z] Total : 69799.47 272.65 0.00 0.00 914.14 697.47 3895.31 00:23:25.835 23:04:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:25.835 23:04:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:23:25.835 23:04:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:23:25.835 23:04:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:23:25.835 23:04:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:25.835 { 00:23:25.835 "subsystems": [ 00:23:25.835 { 00:23:25.835 "subsystem": "bdev", 00:23:25.835 "config": [ 00:23:25.835 { 00:23:25.835 "params": { 00:23:25.835 "io_mechanism": "io_uring_cmd", 00:23:25.835 "conserve_cpu": false, 00:23:25.835 "filename": "/dev/ng0n1", 00:23:25.835 "name": "xnvme_bdev" 00:23:25.835 }, 00:23:25.835 "method": "bdev_xnvme_create" 00:23:25.835 }, 00:23:25.835 { 00:23:25.835 "method": "bdev_wait_for_examine" 00:23:25.835 } 00:23:25.835 ] 00:23:25.835 } 00:23:25.835 ] 00:23:25.835 } 00:23:25.835 [2024-12-09 23:04:53.152429] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:23:25.835 [2024-12-09 23:04:53.152594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73001 ] 00:23:26.121 [2024-12-09 23:04:53.340730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.380 [2024-12-09 23:04:53.464318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.639 Running I/O for 5 seconds... 
00:23:28.518 66160.00 IOPS, 258.44 MiB/s [2024-12-09T23:04:57.241Z] 63467.00 IOPS, 247.92 MiB/s [2024-12-09T23:04:58.175Z] 49311.00 IOPS, 192.62 MiB/s [2024-12-09T23:04:59.113Z] 40064.25 IOPS, 156.50 MiB/s 00:23:31.777 Latency(us) 00:23:31.777 [2024-12-09T23:04:59.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.777 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:23:31.777 xnvme_bdev : 5.00 38213.55 149.27 0.00 0.00 1670.87 68.27 38532.01 00:23:31.777 [2024-12-09T23:04:59.113Z] =================================================================================================================== 00:23:31.777 [2024-12-09T23:04:59.113Z] Total : 38213.55 149.27 0.00 0.00 1670.87 68.27 38532.01 00:23:32.713 00:23:32.713 real 0m27.951s 00:23:32.713 user 0m14.183s 00:23:32.713 sys 0m13.355s 00:23:32.971 23:05:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.971 23:05:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:32.971 ************************************ 00:23:32.971 END TEST xnvme_bdevperf 00:23:32.971 ************************************ 00:23:32.971 23:05:00 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:23:32.971 23:05:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:32.971 23:05:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.971 23:05:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:32.971 ************************************ 00:23:32.971 START TEST xnvme_fio_plugin 00:23:32.971 ************************************ 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:32.971 23:05:00 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:23:32.972 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:32.972 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:32.972 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:32.972 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:23:32.972 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:32.972 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:32.972 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:32.972 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:23:32.972 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:32.972 23:05:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:32.972 { 00:23:32.972 "subsystems": [ 00:23:32.972 { 00:23:32.972 "subsystem": "bdev", 00:23:32.972 "config": [ 00:23:32.972 { 00:23:32.972 "params": { 00:23:32.972 "io_mechanism": "io_uring_cmd", 00:23:32.972 "conserve_cpu": false, 00:23:32.972 "filename": "/dev/ng0n1", 00:23:32.972 "name": "xnvme_bdev" 00:23:32.972 }, 00:23:32.972 "method": "bdev_xnvme_create" 00:23:32.972 }, 00:23:32.972 { 00:23:32.972 "method": "bdev_wait_for_examine" 00:23:32.972 } 00:23:32.972 ] 00:23:32.972 } 00:23:32.972 ] 00:23:32.972 } 00:23:33.231 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:23:33.231 fio-3.35 00:23:33.231 Starting 1 thread 00:23:39.805 00:23:39.805 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73125: Mon Dec 9 23:05:06 2024 00:23:39.805 read: IOPS=28.8k, BW=112MiB/s (118MB/s)(562MiB/5002msec) 00:23:39.805 slat (usec): min=3, max=157, avg= 6.35, stdev= 2.55 00:23:39.805 clat (usec): min=1169, max=8306, avg=1973.32, stdev=331.11 00:23:39.805 lat (usec): min=1174, max=8315, avg=1979.67, stdev=331.90 00:23:39.805 clat percentiles (usec): 00:23:39.805 | 1.00th=[ 1418], 5.00th=[ 1565], 10.00th=[ 1631], 20.00th=[ 1729], 00:23:39.805 | 30.00th=[ 1795], 40.00th=[ 1860], 50.00th=[ 1926], 60.00th=[ 1991], 00:23:39.805 | 70.00th=[ 2089], 80.00th=[ 2212], 90.00th=[ 2376], 95.00th=[ 2540], 00:23:39.805 | 99.00th=[ 2868], 99.50th=[ 3064], 99.90th=[ 4015], 99.95th=[ 4424], 00:23:39.805 | 99.99th=[ 8225] 00:23:39.805 bw ( KiB/s): min=95744, max=125952, per=98.99%, avg=113834.67, stdev=9748.19, samples=9 00:23:39.805 iops : min=23936, max=31488, avg=28458.67, stdev=2437.05, samples=9 00:23:39.805 lat (msec) : 2=60.46%, 4=39.44%, 10=0.11% 00:23:39.805 cpu : usr=35.37%, sys=63.45%, ctx=12, majf=0, minf=762 00:23:39.805 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:23:39.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:39.805 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:23:39.805 issued rwts: total=143808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:23:39.805 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:39.805 00:23:39.805 Run status group 0 (all jobs): 00:23:39.805 READ: bw=112MiB/s (118MB/s), 112MiB/s-112MiB/s (118MB/s-118MB/s), io=562MiB (589MB), run=5002-5002msec 00:23:40.372 ----------------------------------------------------- 00:23:40.372 Suppressions used: 00:23:40.372 count bytes template 00:23:40.372 1 11 /usr/src/fio/parse.c 00:23:40.372 1 8 libtcmalloc_minimal.so 00:23:40.372 1 904 libcrypto.so 00:23:40.372 ----------------------------------------------------- 00:23:40.372 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:40.372 { 00:23:40.372 "subsystems": [ 00:23:40.372 { 00:23:40.372 "subsystem": "bdev", 00:23:40.372 "config": [ 00:23:40.372 { 00:23:40.372 "params": { 00:23:40.372 "io_mechanism": "io_uring_cmd", 00:23:40.372 "conserve_cpu": false, 00:23:40.372 "filename": "/dev/ng0n1", 00:23:40.372 "name": "xnvme_bdev" 00:23:40.372 }, 00:23:40.372 "method": "bdev_xnvme_create" 00:23:40.372 }, 00:23:40.372 { 00:23:40.372 "method": "bdev_wait_for_examine" 00:23:40.372 } 00:23:40.372 ] 00:23:40.372 } 00:23:40.372 ] 00:23:40.372 } 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1351 -- # break 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:40.372 23:05:07 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:23:40.631 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:23:40.631 fio-3.35 00:23:40.631 Starting 1 thread 00:23:47.215 00:23:47.215 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73217: Mon Dec 9 23:05:13 2024 00:23:47.215 write: IOPS=33.0k, BW=129MiB/s (135MB/s)(645MiB/5001msec); 0 zone resets 00:23:47.215 slat (usec): min=2, max=387, avg= 5.38, stdev= 3.81 00:23:47.215 clat (usec): min=205, max=50662, avg=1725.66, stdev=1104.14 00:23:47.215 lat (usec): min=216, max=50670, avg=1731.03, stdev=1104.74 00:23:47.215 clat percentiles (usec): 00:23:47.215 | 1.00th=[ 898], 5.00th=[ 1020], 10.00th=[ 1106], 20.00th=[ 1254], 00:23:47.215 | 30.00th=[ 1418], 40.00th=[ 1565], 50.00th=[ 1713], 60.00th=[ 1844], 00:23:47.215 | 70.00th=[ 1942], 80.00th=[ 2073], 90.00th=[ 2245], 95.00th=[ 2376], 00:23:47.215 | 99.00th=[ 2802], 99.50th=[ 3032], 99.90th=[13042], 99.95th=[19268], 00:23:47.215 | 99.99th=[49546] 00:23:47.215 bw ( KiB/s): min=107832, max=164176, per=100.00%, avg=134764.44, stdev=18471.59, samples=9 00:23:47.215 iops : min=26958, max=41044, avg=33691.11, stdev=4617.90, samples=9 00:23:47.215 lat (usec) : 250=0.01%, 500=0.01%, 750=0.06%, 1000=3.98% 00:23:47.215 lat (msec) : 2=70.37%, 4=25.41%, 10=0.06%, 20=0.08%, 50=0.04% 00:23:47.215 lat (msec) : 100=0.01% 00:23:47.215 cpu : usr=34.54%, sys=63.58%, ctx=56, majf=0, minf=763 00:23:47.215 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.7%, 32=50.6%, >=64=1.6% 00:23:47.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.215 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:23:47.215 issued rwts: total=0,165082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:47.215 00:23:47.215 Run status group 0 (all jobs): 00:23:47.215 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=645MiB (676MB), run=5001-5001msec 00:23:48.150 ----------------------------------------------------- 00:23:48.150 Suppressions used: 00:23:48.150 count bytes template 00:23:48.150 1 11 /usr/src/fio/parse.c 00:23:48.150 1 8 libtcmalloc_minimal.so 00:23:48.150 1 904 libcrypto.so 00:23:48.150 ----------------------------------------------------- 00:23:48.150 00:23:48.150 00:23:48.150 real 0m15.079s 00:23:48.150 user 0m7.470s 00:23:48.150 sys 0m7.202s 00:23:48.150 23:05:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.150 23:05:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:23:48.150 ************************************ 00:23:48.150 END TEST xnvme_fio_plugin 00:23:48.150 ************************************ 00:23:48.150 23:05:15 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:23:48.150 23:05:15 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:23:48.150 23:05:15 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:23:48.150 
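For reference: the pass that follows repeats xnvme_rpc with conserve_cpu=true, i.e. the same RPC round-trip traced above but with the -c flag. A minimal sketch, assuming SPDK's scripts/rpc.py client against the default /var/tmp/spdk.sock (the harness's rpc_cmd is a wrapper around it); the commands mirror the rpc_cmd calls traced in this log:

  # create the xnvme bdev over the io_uring_cmd char-ns device, with CPU conservation on (-c)
  scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
  # verify the flag round-trips through the runtime config
  scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
  # tear the bdev down again
  scripts/rpc.py bdev_xnvme_delete xnvme_bdev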
23:05:15 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:23:48.150 23:05:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:48.150 23:05:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.150 23:05:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:48.150 ************************************ 00:23:48.150 START TEST xnvme_rpc 00:23:48.150 ************************************ 00:23:48.150 23:05:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:23:48.150 23:05:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:23:48.150 23:05:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:23:48.150 23:05:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:23:48.150 23:05:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:23:48.150 23:05:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:48.150 23:05:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73309 00:23:48.150 23:05:15 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73309 00:23:48.150 23:05:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73309 ']' 00:23:48.150 23:05:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.150 23:05:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.151 23:05:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.151 23:05:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.151 23:05:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:48.151 [2024-12-09 23:05:15.397821] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:23:48.151 [2024-12-09 23:05:15.398203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73309 ] 00:23:48.409 [2024-12-09 23:05:15.583743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.409 [2024-12-09 23:05:15.710146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.345 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.345 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:23:49.345 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:23:49.345 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.345 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:49.345 xnvme_bdev 00:23:49.345 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.345 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:23:49.345 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:49.345 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:23:49.345 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.345 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:23:49.604 
23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73309 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73309 ']' 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73309 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73309 00:23:49.604 killing process with pid 73309 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73309' 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73309 00:23:49.604 23:05:16 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73309 00:23:52.156 00:23:52.156 real 0m4.146s 00:23:52.156 user 0m4.195s 00:23:52.156 sys 0m0.636s 00:23:52.156 23:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.156 23:05:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:52.156 ************************************ 00:23:52.156 END TEST xnvme_rpc 00:23:52.156 ************************************ 00:23:52.414 23:05:19 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:23:52.414 23:05:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:52.414 23:05:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.414 23:05:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:52.414 ************************************ 00:23:52.414 START TEST xnvme_bdevperf 00:23:52.414 ************************************ 00:23:52.414 23:05:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:23:52.414 23:05:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:23:52.414 23:05:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:23:52.414 23:05:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:52.414 23:05:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:23:52.414 23:05:19 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:23:52.414 23:05:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:23:52.414 23:05:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:52.414 { 00:23:52.414 "subsystems": [ 00:23:52.414 { 00:23:52.414 "subsystem": "bdev", 00:23:52.414 "config": [ 00:23:52.414 { 00:23:52.414 "params": { 00:23:52.414 "io_mechanism": "io_uring_cmd", 00:23:52.414 "conserve_cpu": true, 00:23:52.414 "filename": "/dev/ng0n1", 00:23:52.414 "name": "xnvme_bdev" 00:23:52.414 }, 00:23:52.414 "method": "bdev_xnvme_create" 00:23:52.414 }, 00:23:52.414 { 00:23:52.414 "method": "bdev_wait_for_examine" 00:23:52.414 } 00:23:52.414 ] 00:23:52.414 } 00:23:52.414 ] 00:23:52.414 } 00:23:52.414 [2024-12-09 23:05:19.607412] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:23:52.414 [2024-12-09 23:05:19.607568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73395 ] 00:23:52.672 [2024-12-09 23:05:19.793149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.672 [2024-12-09 23:05:19.915186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.239 Running I/O for 5 seconds... 00:23:55.111 38400.00 IOPS, 150.00 MiB/s [2024-12-09T23:05:23.381Z] 38624.00 IOPS, 150.88 MiB/s [2024-12-09T23:05:24.317Z] 38292.00 IOPS, 149.58 MiB/s [2024-12-09T23:05:25.698Z] 38734.75 IOPS, 151.31 MiB/s 00:23:58.362 Latency(us) 00:23:58.362 [2024-12-09T23:05:25.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.362 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:23:58.362 xnvme_bdev : 5.00 38531.74 150.51 0.00 0.00 1656.27 287.87 9001.33 00:23:58.362 [2024-12-09T23:05:25.698Z] =================================================================================================================== 00:23:58.362 [2024-12-09T23:05:25.698Z] Total : 38531.74 150.51 0.00 0.00 1656.27 287.87 9001.33 00:23:59.296 23:05:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:23:59.296 23:05:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:23:59.296 23:05:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:23:59.296 23:05:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:23:59.296 23:05:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:59.296 { 00:23:59.296 "subsystems": [ 00:23:59.296 { 00:23:59.296 "subsystem": "bdev", 00:23:59.296 "config": [ 00:23:59.296 { 00:23:59.296 "params": { 00:23:59.296 "io_mechanism": "io_uring_cmd", 00:23:59.296 "conserve_cpu": true, 00:23:59.296 "filename": "/dev/ng0n1", 00:23:59.296 "name": "xnvme_bdev" 00:23:59.296 }, 00:23:59.296 "method": "bdev_xnvme_create" 00:23:59.296 }, 00:23:59.296 { 00:23:59.296 "method": "bdev_wait_for_examine" 00:23:59.296 } 00:23:59.296 ] 00:23:59.296 } 00:23:59.296 ] 00:23:59.296 } 00:23:59.296 [2024-12-09 23:05:26.606181] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:23:59.296 [2024-12-09 23:05:26.606328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73472 ] 00:23:59.554 [2024-12-09 23:05:26.792489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.814 [2024-12-09 23:05:26.919594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.073 Running I/O for 5 seconds... 00:24:02.385 35264.00 IOPS, 137.75 MiB/s [2024-12-09T23:05:30.705Z] 35899.00 IOPS, 140.23 MiB/s [2024-12-09T23:05:31.641Z] 34192.33 IOPS, 133.56 MiB/s [2024-12-09T23:05:32.577Z] 33116.25 IOPS, 129.36 MiB/s 00:24:05.241 Latency(us) 00:24:05.241 [2024-12-09T23:05:32.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.241 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:24:05.241 xnvme_bdev : 5.00 32096.29 125.38 0.00 0.00 1987.82 60.04 27161.91 00:24:05.241 [2024-12-09T23:05:32.577Z] =================================================================================================================== 00:24:05.241 [2024-12-09T23:05:32.577Z] Total : 32096.29 125.38 0.00 0.00 1987.82 60.04 27161.91 00:24:06.176 23:05:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:06.176 23:05:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:24:06.176 23:05:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:24:06.176 23:05:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:06.176 23:05:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:06.435 { 00:24:06.435 "subsystems": [ 00:24:06.435 { 00:24:06.435 "subsystem": "bdev", 00:24:06.435 "config": [ 00:24:06.435 { 00:24:06.435 "params": { 00:24:06.435 "io_mechanism": "io_uring_cmd", 00:24:06.435 "conserve_cpu": true, 00:24:06.435 "filename": "/dev/ng0n1", 00:24:06.435 "name": "xnvme_bdev" 00:24:06.435 }, 00:24:06.435 "method": "bdev_xnvme_create" 00:24:06.435 }, 00:24:06.435 { 00:24:06.435 "method": "bdev_wait_for_examine" 00:24:06.435 } 00:24:06.435 ] 00:24:06.435 } 00:24:06.435 ] 00:24:06.435 } 00:24:06.435 [2024-12-09 23:05:33.599048] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:24:06.435 [2024-12-09 23:05:33.599194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73549 ] 00:24:06.694 [2024-12-09 23:05:33.784204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.694 [2024-12-09 23:05:33.910222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.261 Running I/O for 5 seconds... 
00:24:09.139 69056.00 IOPS, 269.75 MiB/s [2024-12-09T23:05:37.411Z] 68896.00 IOPS, 269.12 MiB/s [2024-12-09T23:05:38.346Z] 68693.33 IOPS, 268.33 MiB/s [2024-12-09T23:05:39.728Z] 68272.00 IOPS, 266.69 MiB/s [2024-12-09T23:05:39.728Z] 68224.00 IOPS, 266.50 MiB/s 00:24:12.392 Latency(us) 00:24:12.392 [2024-12-09T23:05:39.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.392 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:24:12.392 xnvme_bdev : 5.00 68210.85 266.45 0.00 0.00 935.32 375.06 2750.41 00:24:12.392 [2024-12-09T23:05:39.728Z] =================================================================================================================== 00:24:12.392 [2024-12-09T23:05:39.728Z] Total : 68210.85 266.45 0.00 0.00 935.32 375.06 2750.41 00:24:13.331 23:05:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:13.331 23:05:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:24:13.331 23:05:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:24:13.331 23:05:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:24:13.331 23:05:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:13.331 { 00:24:13.331 "subsystems": [ 00:24:13.331 { 00:24:13.331 "subsystem": "bdev", 00:24:13.331 "config": [ 00:24:13.331 { 00:24:13.331 "params": { 00:24:13.331 "io_mechanism": "io_uring_cmd", 00:24:13.331 "conserve_cpu": true, 00:24:13.331 "filename": "/dev/ng0n1", 00:24:13.331 "name": "xnvme_bdev" 00:24:13.331 }, 00:24:13.331 "method": "bdev_xnvme_create" 00:24:13.331 }, 00:24:13.331 { 00:24:13.331 "method": "bdev_wait_for_examine" 00:24:13.331 } 00:24:13.331 ] 00:24:13.331 } 00:24:13.331 ] 00:24:13.331 } 00:24:13.331 [2024-12-09 23:05:40.631347] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:24:13.331 [2024-12-09 23:05:40.631507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73629 ] 00:24:13.592 [2024-12-09 23:05:40.817341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.852 [2024-12-09 23:05:40.936690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.110 Running I/O for 5 seconds... 
00:24:16.421 38006.00 IOPS, 148.46 MiB/s [2024-12-09T23:05:44.693Z] 37882.00 IOPS, 147.98 MiB/s [2024-12-09T23:05:45.635Z] 35786.00 IOPS, 139.79 MiB/s [2024-12-09T23:05:46.570Z] 36467.50 IOPS, 142.45 MiB/s [2024-12-09T23:05:46.570Z] 36434.20 IOPS, 142.32 MiB/s 00:24:19.234 Latency(us) 00:24:19.234 [2024-12-09T23:05:46.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.234 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:24:19.234 xnvme_bdev : 5.01 36384.03 142.13 0.00 0.00 1752.58 60.86 28425.25 00:24:19.234 [2024-12-09T23:05:46.570Z] =================================================================================================================== 00:24:19.234 [2024-12-09T23:05:46.570Z] Total : 36384.03 142.13 0.00 0.00 1752.58 60.86 28425.25 00:24:20.620 00:24:20.620 real 0m28.043s 00:24:20.620 user 0m17.817s 00:24:20.620 sys 0m8.715s 00:24:20.620 23:05:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:20.620 23:05:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.621 ************************************ 00:24:20.621 END TEST xnvme_bdevperf 00:24:20.621 ************************************ 00:24:20.621 23:05:47 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:24:20.621 23:05:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:20.621 23:05:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:20.621 23:05:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:20.621 ************************************ 00:24:20.621 START TEST xnvme_fio_plugin 00:24:20.621 ************************************ 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:20.621 { 00:24:20.621 "subsystems": [ 00:24:20.621 { 00:24:20.621 "subsystem": "bdev", 00:24:20.621 "config": [ 00:24:20.621 { 00:24:20.621 "params": { 00:24:20.621 "io_mechanism": "io_uring_cmd", 00:24:20.621 "conserve_cpu": true, 00:24:20.621 "filename": "/dev/ng0n1", 00:24:20.621 "name": "xnvme_bdev" 00:24:20.621 }, 00:24:20.621 "method": "bdev_xnvme_create" 00:24:20.621 }, 00:24:20.621 { 00:24:20.621 "method": "bdev_wait_for_examine" 00:24:20.621 } 00:24:20.621 ] 00:24:20.621 } 00:24:20.621 ] 00:24:20.621 } 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:20.621 23:05:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:20.621 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:24:20.621 fio-3.35 00:24:20.621 Starting 1 thread 00:24:27.191 00:24:27.191 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73764: Mon Dec 9 23:05:53 2024 00:24:27.191 read: IOPS=32.4k, BW=127MiB/s (133MB/s)(634MiB/5001msec) 00:24:27.191 slat (usec): min=2, max=141, avg= 5.52, stdev= 2.36 00:24:27.191 clat (usec): min=807, max=6258, avg=1752.42, stdev=348.50 00:24:27.191 lat (usec): min=810, max=6267, avg=1757.94, stdev=349.56 00:24:27.191 clat percentiles (usec): 00:24:27.191 | 1.00th=[ 1045], 5.00th=[ 1188], 10.00th=[ 1287], 20.00th=[ 1450], 00:24:27.191 | 30.00th=[ 1582], 40.00th=[ 1680], 50.00th=[ 1762], 60.00th=[ 1844], 00:24:27.191 | 70.00th=[ 1926], 80.00th=[ 2024], 90.00th=[ 2180], 95.00th=[ 2278], 00:24:27.191 | 99.00th=[ 2606], 99.50th=[ 2769], 99.90th=[ 3294], 99.95th=[ 3523], 00:24:27.191 | 99.99th=[ 6128] 00:24:27.191 bw ( KiB/s): min=114688, max=153600, per=98.86%, avg=128284.44, stdev=12528.61, samples=9 00:24:27.191 iops : min=28672, max=38400, avg=32071.11, stdev=3132.15, samples=9 00:24:27.191 lat (usec) : 1000=0.48% 00:24:27.191 lat (msec) : 2=77.85%, 4=21.63%, 10=0.04% 00:24:27.191 cpu : usr=52.26%, sys=45.00%, ctx=10, majf=0, minf=762 00:24:27.191 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:24:27.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.191 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:24:27.191 issued rwts: total=162240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.191 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:27.191 00:24:27.191 Run status group 0 (all jobs): 00:24:27.191 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=634MiB (665MB), run=5001-5001msec 00:24:28.138 ----------------------------------------------------- 00:24:28.138 Suppressions used: 00:24:28.138 count bytes template 00:24:28.138 1 11 /usr/src/fio/parse.c 00:24:28.138 1 8 libtcmalloc_minimal.so 00:24:28.138 1 904 libcrypto.so 00:24:28.138 ----------------------------------------------------- 00:24:28.138 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:28.138 23:05:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:24:28.138 { 00:24:28.138 "subsystems": [ 00:24:28.138 { 00:24:28.138 "subsystem": "bdev", 00:24:28.138 "config": [ 00:24:28.138 { 00:24:28.138 "params": { 00:24:28.138 "io_mechanism": "io_uring_cmd", 00:24:28.138 "conserve_cpu": true, 00:24:28.138 "filename": "/dev/ng0n1", 00:24:28.138 "name": "xnvme_bdev" 00:24:28.138 }, 00:24:28.138 "method": "bdev_xnvme_create" 00:24:28.138 }, 00:24:28.138 { 00:24:28.138 "method": "bdev_wait_for_examine" 00:24:28.138 } 00:24:28.138 ] 00:24:28.138 } 00:24:28.138 ] 00:24:28.138 } 00:24:28.138 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:24:28.138 fio-3.35 00:24:28.138 Starting 1 thread 00:24:34.729 00:24:34.729 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73855: Mon Dec 9 23:06:01 2024 00:24:34.729 write: IOPS=32.9k, BW=128MiB/s (135MB/s)(642MiB/5001msec); 0 zone resets 00:24:34.729 slat (nsec): min=2400, max=84245, avg=5546.95, stdev=2605.75 00:24:34.729 clat (usec): min=815, max=7617, avg=1724.99, stdev=375.05 00:24:34.729 lat (usec): min=818, max=7622, avg=1730.54, stdev=376.38 00:24:34.729 clat percentiles (usec): 00:24:34.729 | 1.00th=[ 979], 5.00th=[ 1123], 10.00th=[ 1237], 20.00th=[ 1434], 00:24:34.729 | 30.00th=[ 1532], 40.00th=[ 1614], 50.00th=[ 1696], 60.00th=[ 1795], 00:24:34.729 | 70.00th=[ 1893], 80.00th=[ 2024], 90.00th=[ 2212], 95.00th=[ 2376], 00:24:34.729 | 99.00th=[ 2737], 99.50th=[ 2900], 99.90th=[ 3261], 99.95th=[ 3359], 00:24:34.729 | 99.99th=[ 3458] 00:24:34.729 bw ( KiB/s): min=114688, max=140288, per=99.20%, avg=130501.33, stdev=8826.45, samples=9 00:24:34.729 iops : min=28672, max=35072, avg=32625.33, stdev=2206.61, samples=9 00:24:34.729 lat (usec) : 1000=1.30% 00:24:34.729 lat (msec) : 2=76.53%, 4=22.16%, 10=0.01% 00:24:34.729 cpu : usr=52.50%, sys=44.84%, ctx=9, majf=0, minf=763 00:24:34.729 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:24:34.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.729 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:24:34.729 issued rwts: total=0,164478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.729 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:34.729 00:24:34.729 Run status group 0 (all jobs): 00:24:34.729 WRITE: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=642MiB (674MB), run=5001-5001msec 00:24:35.666 ----------------------------------------------------- 00:24:35.666 Suppressions used: 00:24:35.666 count bytes template 00:24:35.666 1 11 /usr/src/fio/parse.c 00:24:35.666 1 8 libtcmalloc_minimal.so 00:24:35.666 1 904 libcrypto.so 00:24:35.666 ----------------------------------------------------- 00:24:35.666 00:24:35.666 00:24:35.666 real 0m15.049s 00:24:35.666 user 0m9.198s 00:24:35.666 sys 0m5.342s 00:24:35.666 ************************************ 00:24:35.666 END TEST xnvme_fio_plugin 00:24:35.666 ************************************ 00:24:35.666 23:06:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:35.666 23:06:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:24:35.666 23:06:02 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73309 00:24:35.666 23:06:02 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73309 ']' 00:24:35.666 23:06:02 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73309 00:24:35.666 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73309) - No such process 00:24:35.666 Process with pid 73309 is not found 00:24:35.666 23:06:02 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73309 is not found' 00:24:35.666 ************************************ 00:24:35.666 END TEST nvme_xnvme 00:24:35.666 ************************************ 00:24:35.666 00:24:35.666 real 3m56.875s 00:24:35.666 user 2m9.942s 00:24:35.666 sys 1m30.750s 00:24:35.666 23:06:02 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:35.666 23:06:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:35.666 23:06:02 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:24:35.666 23:06:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:35.666 23:06:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:35.666 23:06:02 -- common/autotest_common.sh@10 -- # set +x 00:24:35.666 ************************************ 00:24:35.666 START TEST blockdev_xnvme 00:24:35.666 ************************************ 00:24:35.666 23:06:02 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:24:35.666 * Looking for test storage... 00:24:35.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:35.666 23:06:02 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:35.666 23:06:02 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:24:35.666 23:06:02 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.927 23:06:03 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:35.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.927 --rc genhtml_branch_coverage=1 00:24:35.927 --rc genhtml_function_coverage=1 00:24:35.927 --rc genhtml_legend=1 00:24:35.927 --rc geninfo_all_blocks=1 00:24:35.927 --rc geninfo_unexecuted_blocks=1 00:24:35.927 00:24:35.927 ' 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:35.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.927 --rc genhtml_branch_coverage=1 00:24:35.927 --rc genhtml_function_coverage=1 00:24:35.927 --rc genhtml_legend=1 00:24:35.927 --rc geninfo_all_blocks=1 00:24:35.927 --rc geninfo_unexecuted_blocks=1 00:24:35.927 00:24:35.927 ' 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:35.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.927 --rc genhtml_branch_coverage=1 00:24:35.927 --rc genhtml_function_coverage=1 00:24:35.927 --rc genhtml_legend=1 00:24:35.927 --rc geninfo_all_blocks=1 00:24:35.927 --rc geninfo_unexecuted_blocks=1 00:24:35.927 00:24:35.927 ' 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:35.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.927 --rc genhtml_branch_coverage=1 00:24:35.927 --rc genhtml_function_coverage=1 00:24:35.927 --rc genhtml_legend=1 00:24:35.927 --rc geninfo_all_blocks=1 00:24:35.927 --rc geninfo_unexecuted_blocks=1 00:24:35.927 00:24:35.927 ' 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73995 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73995 00:24:35.927 23:06:03 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73995 ']' 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.927 23:06:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:35.927 [2024-12-09 23:06:03.195428] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
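The trace above launches spdk_tgt in the background and then blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-poll pattern, assuming rpc.py with rpc_get_methods as the liveness probe (the harness's own helper may differ in detail):

    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk.sock
    # start the target in the background and remember its pid
    "$SPDK_ROOT/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # poll the RPC socket until the target is listening (up to ~50 s)
    for ((i = 1; i <= 100; i++)); do
        if "$SPDK_ROOT/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.5
    done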
00:24:35.927 [2024-12-09 23:06:03.195584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73995 ] 00:24:36.187 [2024-12-09 23:06:03.379478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.187 [2024-12-09 23:06:03.506168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.620 23:06:04 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.620 23:06:04 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:24:37.620 23:06:04 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:24:37.620 23:06:04 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:24:37.620 23:06:04 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:24:37.620 23:06:04 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:24:37.620 23:06:04 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:37.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:38.856 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:38.856 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:38.856 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:24:38.856 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:24:38.856 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:24:38.856 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:24:38.857 23:06:06 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:24:38.857 nvme0n1 00:24:38.857 nvme0n2 00:24:38.857 nvme0n3 00:24:38.857 nvme1n1 00:24:38.857 nvme2n1 00:24:38.857 nvme3n1 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.857 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.857 23:06:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:39.115 
23:06:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.115 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:24:39.115 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:24:39.115 23:06:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.115 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:24:39.115 23:06:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:39.115 23:06:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.115 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:24:39.115 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:24:39.116 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "87edb157-b0bb-4312-99b3-2fdc4c2f4e4c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "87edb157-b0bb-4312-99b3-2fdc4c2f4e4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "392b65c4-287e-43b3-a434-0d328c8fc3b1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "392b65c4-287e-43b3-a434-0d328c8fc3b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "81d59512-296c-48e6-884f-f70fb7aef400"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "81d59512-296c-48e6-884f-f70fb7aef400",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d5f264b7-bce8-4489-8aa3-cdf70a3ed340"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d5f264b7-bce8-4489-8aa3-cdf70a3ed340",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "22dce233-940c-4f23-8614-f6eb290ddcb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "22dce233-940c-4f23-8614-f6eb290ddcb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "aeb31623-b1e1-4a5d-a345-11ac312c04c0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "aeb31623-b1e1-4a5d-a345-11ac312c04c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:24:39.116 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:24:39.116 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:24:39.116 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:24:39.116 23:06:06 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73995 00:24:39.116 23:06:06 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73995 ']' 00:24:39.116 23:06:06 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73995 00:24:39.116 23:06:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:24:39.116 23:06:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.116 23:06:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 73995 00:24:39.116 killing process with pid 73995 00:24:39.116 23:06:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.116 23:06:06 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.116 23:06:06 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73995' 00:24:39.116 23:06:06 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73995 00:24:39.116 23:06:06 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73995 00:24:41.652 23:06:08 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:41.652 23:06:08 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:24:41.652 23:06:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:41.652 23:06:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.652 23:06:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:41.652 ************************************ 00:24:41.652 START TEST bdev_hello_world 00:24:41.652 ************************************ 00:24:41.652 23:06:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:24:41.912 [2024-12-09 23:06:08.991274] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:24:41.912 [2024-12-09 23:06:08.991427] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74300 ] 00:24:41.912 [2024-12-09 23:06:09.177399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.170 [2024-12-09 23:06:09.309905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.739 [2024-12-09 23:06:09.823438] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:42.739 [2024-12-09 23:06:09.823523] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:24:42.739 [2024-12-09 23:06:09.823548] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:42.739 [2024-12-09 23:06:09.825834] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:42.739 [2024-12-09 23:06:09.826342] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:42.739 [2024-12-09 23:06:09.826370] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:42.739 [2024-12-09 23:06:09.826591] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
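The bdev_hello_world test above drives the hello_bdev example end to end: it opens nvme0n1 from the JSON config, writes a buffer, reads it back, and compares. The run can be reproduced by hand with the same invocation the harness used (run as root, since the xnvme io_uring path needs direct device access):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1
    # expected tail of the output:
    #   Read string from bdev : Hello World!
    #   Stopping app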
00:24:42.739 00:24:42.739 [2024-12-09 23:06:09.826616] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:43.674 00:24:43.674 real 0m2.115s 00:24:43.674 user 0m1.709s 00:24:43.674 sys 0m0.286s 00:24:43.674 ************************************ 00:24:43.674 END TEST bdev_hello_world 00:24:43.674 ************************************ 00:24:43.674 23:06:11 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.674 23:06:11 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:24:43.933 23:06:11 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:24:43.933 23:06:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:43.933 23:06:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.933 23:06:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:43.933 ************************************ 00:24:43.933 START TEST bdev_bounds 00:24:43.933 ************************************ 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74338 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:43.933 Process bdevio pid: 74338 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74338' 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74338 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74338 ']' 00:24:43.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.933 23:06:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:43.933 [2024-12-09 23:06:11.186999] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
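bdev_bounds runs bdevio in wait mode (-w) with no reserved memory (-s 0) and then kicks the suites off over RPC. A condensed sketch of that two-step flow, matching the commands in the trace (the harness additionally waits on the RPC socket between the steps and traps cleanup on exit):

    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    # step 1: start bdevio; -w makes it wait for an RPC before running tests
    "$SPDK_ROOT/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_ROOT/test/bdev/bdev.json" &
    bdevio_pid=$!
    # step 2: once /var/tmp/spdk.sock is up, trigger every registered suite
    "$SPDK_ROOT/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"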
00:24:43.933 [2024-12-09 23:06:11.187179] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74338 ] 00:24:44.192 [2024-12-09 23:06:11.371002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:44.192 [2024-12-09 23:06:11.504415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.192 [2024-12-09 23:06:11.504570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.192 [2024-12-09 23:06:11.504599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.759 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.759 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:24:44.759 23:06:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:45.017 I/O targets: 00:24:45.017 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:45.017 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:45.017 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:45.017 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:24:45.017 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:24:45.017 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:24:45.017 00:24:45.017 00:24:45.017 CUnit - A unit testing framework for C - Version 2.1-3 00:24:45.017 http://cunit.sourceforge.net/ 00:24:45.017 00:24:45.017 00:24:45.017 Suite: bdevio tests on: nvme3n1 00:24:45.017 Test: blockdev write read block ...passed 00:24:45.017 Test: blockdev write zeroes read block ...passed 00:24:45.017 Test: blockdev write zeroes read no split ...passed 00:24:45.017 Test: blockdev write zeroes read split ...passed 00:24:45.017 Test: blockdev write zeroes read split partial ...passed 00:24:45.017 Test: blockdev reset ...passed 00:24:45.017 Test: blockdev write read 8 blocks ...passed 00:24:45.017 Test: blockdev write read size > 128k ...passed 00:24:45.017 Test: blockdev write read invalid size ...passed 00:24:45.017 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:45.017 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:45.017 Test: blockdev write read max offset ...passed 00:24:45.017 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:45.017 Test: blockdev writev readv 8 blocks ...passed 00:24:45.017 Test: blockdev writev readv 30 x 1block ...passed 00:24:45.017 Test: blockdev writev readv block ...passed 00:24:45.017 Test: blockdev writev readv size > 128k ...passed 00:24:45.017 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:45.018 Test: blockdev comparev and writev ...passed 00:24:45.018 Test: blockdev nvme passthru rw ...passed 00:24:45.018 Test: blockdev nvme passthru vendor specific ...passed 00:24:45.018 Test: blockdev nvme admin passthru ...passed 00:24:45.018 Test: blockdev copy ...passed 00:24:45.018 Suite: bdevio tests on: nvme2n1 00:24:45.018 Test: blockdev write read block ...passed 00:24:45.018 Test: blockdev write zeroes read block ...passed 00:24:45.018 Test: blockdev write zeroes read no split ...passed 00:24:45.018 Test: blockdev write zeroes read split ...passed 00:24:45.018 Test: blockdev write zeroes read split partial ...passed 00:24:45.018 Test: blockdev reset ...passed 
00:24:45.018 Test: blockdev write read 8 blocks ...passed 00:24:45.018 Test: blockdev write read size > 128k ...passed 00:24:45.018 Test: blockdev write read invalid size ...passed 00:24:45.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:45.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:45.018 Test: blockdev write read max offset ...passed 00:24:45.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:45.018 Test: blockdev writev readv 8 blocks ...passed 00:24:45.018 Test: blockdev writev readv 30 x 1block ...passed 00:24:45.018 Test: blockdev writev readv block ...passed 00:24:45.018 Test: blockdev writev readv size > 128k ...passed 00:24:45.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:45.018 Test: blockdev comparev and writev ...passed 00:24:45.018 Test: blockdev nvme passthru rw ...passed 00:24:45.018 Test: blockdev nvme passthru vendor specific ...passed 00:24:45.018 Test: blockdev nvme admin passthru ...passed 00:24:45.018 Test: blockdev copy ...passed 00:24:45.018 Suite: bdevio tests on: nvme1n1 00:24:45.018 Test: blockdev write read block ...passed 00:24:45.018 Test: blockdev write zeroes read block ...passed 00:24:45.018 Test: blockdev write zeroes read no split ...passed 00:24:45.329 Test: blockdev write zeroes read split ...passed 00:24:45.329 Test: blockdev write zeroes read split partial ...passed 00:24:45.329 Test: blockdev reset ...passed 00:24:45.329 Test: blockdev write read 8 blocks ...passed 00:24:45.329 Test: blockdev write read size > 128k ...passed 00:24:45.329 Test: blockdev write read invalid size ...passed 00:24:45.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:45.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:45.329 Test: blockdev write read max offset ...passed 00:24:45.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:45.329 Test: blockdev writev readv 8 blocks ...passed 00:24:45.329 Test: blockdev writev readv 30 x 1block ...passed 00:24:45.329 Test: blockdev writev readv block ...passed 00:24:45.329 Test: blockdev writev readv size > 128k ...passed 00:24:45.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:45.329 Test: blockdev comparev and writev ...passed 00:24:45.329 Test: blockdev nvme passthru rw ...passed 00:24:45.329 Test: blockdev nvme passthru vendor specific ...passed 00:24:45.329 Test: blockdev nvme admin passthru ...passed 00:24:45.329 Test: blockdev copy ...passed 00:24:45.329 Suite: bdevio tests on: nvme0n3 00:24:45.329 Test: blockdev write read block ...passed 00:24:45.329 Test: blockdev write zeroes read block ...passed 00:24:45.329 Test: blockdev write zeroes read no split ...passed 00:24:45.329 Test: blockdev write zeroes read split ...passed 00:24:45.329 Test: blockdev write zeroes read split partial ...passed 00:24:45.329 Test: blockdev reset ...passed 00:24:45.329 Test: blockdev write read 8 blocks ...passed 00:24:45.329 Test: blockdev write read size > 128k ...passed 00:24:45.329 Test: blockdev write read invalid size ...passed 00:24:45.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:45.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:45.329 Test: blockdev write read max offset ...passed 00:24:45.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:45.329 Test: blockdev writev readv 8 blocks 
...passed 00:24:45.329 Test: blockdev writev readv 30 x 1block ...passed 00:24:45.329 Test: blockdev writev readv block ...passed 00:24:45.329 Test: blockdev writev readv size > 128k ...passed 00:24:45.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:45.329 Test: blockdev comparev and writev ...passed 00:24:45.329 Test: blockdev nvme passthru rw ...passed 00:24:45.329 Test: blockdev nvme passthru vendor specific ...passed 00:24:45.329 Test: blockdev nvme admin passthru ...passed 00:24:45.329 Test: blockdev copy ...passed 00:24:45.329 Suite: bdevio tests on: nvme0n2 00:24:45.329 Test: blockdev write read block ...passed 00:24:45.329 Test: blockdev write zeroes read block ...passed 00:24:45.329 Test: blockdev write zeroes read no split ...passed 00:24:45.329 Test: blockdev write zeroes read split ...passed 00:24:45.329 Test: blockdev write zeroes read split partial ...passed 00:24:45.329 Test: blockdev reset ...passed 00:24:45.329 Test: blockdev write read 8 blocks ...passed 00:24:45.329 Test: blockdev write read size > 128k ...passed 00:24:45.329 Test: blockdev write read invalid size ...passed 00:24:45.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:45.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:45.329 Test: blockdev write read max offset ...passed 00:24:45.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:45.329 Test: blockdev writev readv 8 blocks ...passed 00:24:45.329 Test: blockdev writev readv 30 x 1block ...passed 00:24:45.329 Test: blockdev writev readv block ...passed 00:24:45.329 Test: blockdev writev readv size > 128k ...passed 00:24:45.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:45.329 Test: blockdev comparev and writev ...passed 00:24:45.329 Test: blockdev nvme passthru rw ...passed 00:24:45.329 Test: blockdev nvme passthru vendor specific ...passed 00:24:45.329 Test: blockdev nvme admin passthru ...passed 00:24:45.329 Test: blockdev copy ...passed 00:24:45.329 Suite: bdevio tests on: nvme0n1 00:24:45.329 Test: blockdev write read block ...passed 00:24:45.329 Test: blockdev write zeroes read block ...passed 00:24:45.329 Test: blockdev write zeroes read no split ...passed 00:24:45.329 Test: blockdev write zeroes read split ...passed 00:24:45.599 Test: blockdev write zeroes read split partial ...passed 00:24:45.599 Test: blockdev reset ...passed 00:24:45.599 Test: blockdev write read 8 blocks ...passed 00:24:45.599 Test: blockdev write read size > 128k ...passed 00:24:45.599 Test: blockdev write read invalid size ...passed 00:24:45.599 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:45.599 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:45.599 Test: blockdev write read max offset ...passed 00:24:45.599 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:45.599 Test: blockdev writev readv 8 blocks ...passed 00:24:45.599 Test: blockdev writev readv 30 x 1block ...passed 00:24:45.599 Test: blockdev writev readv block ...passed 00:24:45.599 Test: blockdev writev readv size > 128k ...passed 00:24:45.599 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:45.599 Test: blockdev comparev and writev ...passed 00:24:45.599 Test: blockdev nvme passthru rw ...passed 00:24:45.599 Test: blockdev nvme passthru vendor specific ...passed 00:24:45.599 Test: blockdev nvme admin passthru ...passed 00:24:45.599 Test: blockdev copy ...passed 
00:24:45.599 00:24:45.599 Run Summary: Type Total Ran Passed Failed Inactive 00:24:45.599 suites 6 6 n/a 0 0 00:24:45.599 tests 138 138 138 0 0 00:24:45.599 asserts 780 780 780 0 n/a 00:24:45.599 00:24:45.599 Elapsed time = 1.490 seconds 00:24:45.599 0 00:24:45.599 23:06:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74338 00:24:45.599 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74338 ']' 00:24:45.599 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74338 00:24:45.599 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:24:45.599 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.599 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74338 00:24:45.599 killing process with pid 74338 00:24:45.599 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:45.599 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:45.599 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74338' 00:24:45.599 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74338 00:24:45.599 23:06:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74338 00:24:46.975 ************************************ 00:24:46.975 END TEST bdev_bounds 00:24:46.975 ************************************ 00:24:46.975 23:06:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:24:46.975 00:24:46.975 real 0m2.872s 00:24:46.975 user 0m7.039s 00:24:46.975 sys 0m0.483s 00:24:46.975 23:06:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.975 23:06:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:46.975 23:06:14 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:24:46.975 23:06:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:46.975 23:06:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.975 23:06:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:46.975 ************************************ 00:24:46.975 START TEST bdev_nbd 00:24:46.975 ************************************ 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
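The bdev_nbd test that begins here exports each xnvme bdev as a kernel block device through the nbd driver, so ordinary tools (dd, stat) can exercise it. The mapping is a pair of RPCs against the dedicated /var/tmp/spdk-nbd.sock socket, as the nbd_start_disk calls below show; a minimal round trip looks like this (in the trace the harness passes only the bdev name and lets the device be auto-assigned, but an explicit /dev/nbdX is also accepted):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC nbd_start_disk nvme0n1 /dev/nbd0                        # attach bdev nvme0n1 to /dev/nbd0
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct    # plain block I/O now works
    $RPC nbd_stop_disk /dev/nbd0                                 # detach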
00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74404 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74404 /var/tmp/spdk-nbd.sock 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74404 ']' 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:46.975 23:06:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:46.976 23:06:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:46.976 23:06:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.976 23:06:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:46.976 [2024-12-09 23:06:14.144725] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
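Before touching a freshly attached device, the waitfornbd helper visible in the trace below polls /proc/partitions for the nbd name and then proves the device is readable with a single 4 KiB direct-I/O read. A simplified rendering of that check (the real helper also writes the block to a scratch file and verifies its size; the sleep interval here is an assumption):

    waitfornbd() {
        local nbd_name=$1 i
        # wait for the kernel to publish the new partition entry
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one direct-I/O read confirms the nbd connection is live
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }
    waitfornbd nbd0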
00:24:46.976 [2024-12-09 23:06:14.145139] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.234 [2024-12-09 23:06:14.323749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.234 [2024-12-09 23:06:14.453494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:47.804 23:06:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:48.067 
1+0 records in 00:24:48.067 1+0 records out 00:24:48.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668702 s, 6.1 MB/s 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:48.067 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:48.326 1+0 records in 00:24:48.326 1+0 records out 00:24:48.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718465 s, 5.7 MB/s 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:48.326 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:24:48.895 23:06:15 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:48.895 1+0 records in 00:24:48.895 1+0 records out 00:24:48.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680804 s, 6.0 MB/s 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:48.895 23:06:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:48.895 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.154 1+0 records in 00:24:49.154 1+0 records out 00:24:49.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765981 s, 5.3 MB/s 00:24:49.154 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.154 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:49.154 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.154 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:49.154 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:49.154 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:49.154 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:49.154 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:24:49.412 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:24:49.412 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:24:49.412 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:24:49.412 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:24:49.412 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:49.412 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:49.412 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.413 1+0 records in 00:24:49.413 1+0 records out 00:24:49.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137567 s, 3.0 MB/s 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:49.413 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:24:49.671 23:06:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:49.671 1+0 records in 00:24:49.671 1+0 records out 00:24:49.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687456 s, 6.0 MB/s 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:24:49.671 23:06:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:49.930 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd0", 00:24:49.930 "bdev_name": "nvme0n1" 00:24:49.930 }, 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd1", 00:24:49.930 "bdev_name": "nvme0n2" 00:24:49.930 }, 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd2", 00:24:49.930 "bdev_name": "nvme0n3" 00:24:49.930 }, 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd3", 00:24:49.930 "bdev_name": "nvme1n1" 00:24:49.930 }, 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd4", 00:24:49.930 "bdev_name": "nvme2n1" 00:24:49.930 }, 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd5", 00:24:49.930 "bdev_name": "nvme3n1" 00:24:49.930 } 00:24:49.930 ]' 00:24:49.930 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:49.930 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd0", 00:24:49.930 "bdev_name": "nvme0n1" 00:24:49.930 }, 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd1", 00:24:49.930 "bdev_name": "nvme0n2" 00:24:49.930 }, 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd2", 00:24:49.930 "bdev_name": "nvme0n3" 00:24:49.930 }, 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd3", 00:24:49.930 "bdev_name": "nvme1n1" 00:24:49.930 }, 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd4", 00:24:49.930 "bdev_name": "nvme2n1" 00:24:49.930 }, 00:24:49.930 { 00:24:49.930 "nbd_device": "/dev/nbd5", 00:24:49.930 "bdev_name": "nvme3n1" 00:24:49.930 } 00:24:49.930 ]' 00:24:49.930 23:06:17 
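The waitfornbd calls traced through this stretch all follow the same shape: poll /proc/partitions until the freshly attached device shows up, then prove it actually serves I/O with a single O_DIRECT read. A condensed reconstruction from the xtrace lines (common/autotest_common.sh@872-893); the sleep between retries and the failure return are assumptions, since only successful attaches appear in this log, and the scratch-file path is shortened from test/bdev/nbdtest:

waitfornbd() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break   # device visible yet?
        sleep 0.1                                          # assumed back-off, not visible in the trace
    done
    for ((i = 1; i <= 20; i++)); do
        # one 4 KiB O_DIRECT read confirms the NBD connection is live
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
        local size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
    done
    return 1   # assumed failure path; the trace only shows successes
}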
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:49.930 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:24:49.930 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:49.930 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:24:49.930 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:49.930 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:49.930 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:49.930 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:50.188 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:50.188 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:50.188 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:50.188 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.188 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.188 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:50.188 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:50.188 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.188 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.188 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:50.446 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:50.446 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:50.446 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:50.446 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.446 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.446 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:50.446 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:50.446 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.446 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.446 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:24:50.711 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:24:50.711 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:24:50.711 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:24:50.711 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.711 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.711 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:24:50.711 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:50.711 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.711 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.711 23:06:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:24:50.969 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:24:50.969 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:24:50.969 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:24:50.969 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:50.969 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:50.969 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:24:50.969 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:50.969 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:50.969 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:50.969 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:51.227 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
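Teardown is the mirror image: nbd_stop_disks (nbd_common.sh@49-55, traced here for nbd0 through nbd5) issues nbd_stop_disk over the RPC socket for each device and then waits for the name to drop back out of /proc/partitions. A minimal sketch; the until-gone polarity and the poll interval are assumptions, since xtrace records the grep and the break but not which grep result triggered the break, and rpc.py stands in for the full scripts/rpc.py path:

waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break   # assumed: loop until the name is gone
        sleep 0.1                                          # assumed poll interval
    done
    return 0
}

for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    waitfornbd_exit "$(basename "$dev")"
done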
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:51.484 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:51.484 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:51.484 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:51.741 23:06:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:24:51.741 /dev/nbd0 00:24:51.741 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:51.999 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:51.999 23:06:19 blockdev_xnvme.bdev_nbd -- 
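The nbd_get_count helper (nbd_common.sh@61-66), traced above with an empty device list and again further down with all six devices attached, tallies live mappings by asking the RPC server for its disk map and counting /dev/nbd entries. Roughly, with rpc.py again standing in for the full path:

nbd_get_count() {
    local rpc_server=$1 json names count
    json=$(rpc.py -s "$rpc_server" nbd_get_disks)            # '[]' once everything is stopped
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)        # grep -c exits 1 on zero matches, hence the || true
    echo "$count"
}

The callers then assert the expected value: 0 after the teardown above, and 6 once nbd_rpc_data_verify has re-attached all six bdevs as /dev/nbd0, nbd1, nbd10, nbd11, nbd12 and nbd13.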
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:52.000 1+0 records in 00:24:52.000 1+0 records out 00:24:52.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554385 s, 7.4 MB/s 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:52.000 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:24:52.258 /dev/nbd1 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:52.258 1+0 records in 00:24:52.258 1+0 records out 00:24:52.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724948 s, 5.7 MB/s 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:52.258 23:06:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:52.258 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:24:52.515 /dev/nbd10 00:24:52.515 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:52.516 1+0 records in 00:24:52.516 1+0 records out 00:24:52.516 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652687 s, 6.3 MB/s 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:52.516 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:24:52.774 /dev/nbd11 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:52.774 23:06:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:52.774 1+0 records in 00:24:52.774 1+0 records out 00:24:52.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720057 s, 5.7 MB/s 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:52.774 23:06:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:24:53.032 /dev/nbd12 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:53.032 1+0 records in 00:24:53.032 1+0 records out 00:24:53.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108142 s, 3.8 MB/s 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:53.032 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:24:53.291 /dev/nbd13 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:53.291 1+0 records in 00:24:53.291 1+0 records out 00:24:53.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770199 s, 5.3 MB/s 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:53.291 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:53.549 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd0", 00:24:53.549 "bdev_name": "nvme0n1" 00:24:53.549 }, 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd1", 00:24:53.549 "bdev_name": "nvme0n2" 00:24:53.549 }, 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd10", 00:24:53.549 "bdev_name": "nvme0n3" 00:24:53.549 }, 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd11", 00:24:53.549 "bdev_name": "nvme1n1" 00:24:53.549 }, 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd12", 00:24:53.549 "bdev_name": "nvme2n1" 00:24:53.549 }, 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd13", 00:24:53.549 "bdev_name": "nvme3n1" 00:24:53.549 } 00:24:53.549 ]' 00:24:53.549 23:06:20 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd0", 00:24:53.549 "bdev_name": "nvme0n1" 00:24:53.549 }, 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd1", 00:24:53.549 "bdev_name": "nvme0n2" 00:24:53.549 }, 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd10", 00:24:53.549 "bdev_name": "nvme0n3" 00:24:53.549 }, 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd11", 00:24:53.549 "bdev_name": "nvme1n1" 00:24:53.549 }, 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd12", 00:24:53.549 "bdev_name": "nvme2n1" 00:24:53.549 }, 00:24:53.549 { 00:24:53.549 "nbd_device": "/dev/nbd13", 00:24:53.549 "bdev_name": "nvme3n1" 00:24:53.549 } 00:24:53.549 ]' 00:24:53.549 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:53.549 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:53.549 /dev/nbd1 00:24:53.549 /dev/nbd10 00:24:53.549 /dev/nbd11 00:24:53.549 /dev/nbd12 00:24:53.549 /dev/nbd13' 00:24:53.549 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:53.549 /dev/nbd1 00:24:53.549 /dev/nbd10 00:24:53.549 /dev/nbd11 00:24:53.549 /dev/nbd12 00:24:53.550 /dev/nbd13' 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:53.550 256+0 records in 00:24:53.550 256+0 records out 00:24:53.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010733 s, 97.7 MB/s 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:53.550 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:53.811 256+0 records in 00:24:53.811 256+0 records out 00:24:53.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119667 s, 8.8 MB/s 00:24:53.811 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:53.811 23:06:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:53.811 256+0 records in 00:24:53.811 256+0 records out 00:24:53.811 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.125786 s, 8.3 MB/s 00:24:53.811 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:53.811 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:24:54.071 256+0 records in 00:24:54.071 256+0 records out 00:24:54.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126182 s, 8.3 MB/s 00:24:54.071 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:54.071 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:24:54.071 256+0 records in 00:24:54.071 256+0 records out 00:24:54.071 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124667 s, 8.4 MB/s 00:24:54.071 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:54.071 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:24:54.329 256+0 records in 00:24:54.329 256+0 records out 00:24:54.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149722 s, 7.0 MB/s 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:24:54.329 256+0 records in 00:24:54.329 256+0 records out 00:24:54.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126867 s, 8.3 MB/s 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:24:54.329 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.330 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:24:54.330 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:54.330 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:54.587 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.844 23:06:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:54.844 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:54.844 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:54.844 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:54.844 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:54.844 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:54.844 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:54.844 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:54.844 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:54.844 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:54.844 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
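The data pass that completes just above (nbd_common.sh@70-85) is a plain dd/cmp round-trip: fill a 1 MiB scratch file from /dev/urandom, write it through every NBD device with O_DIRECT, then byte-compare each device against the file. Condensed from the traced commands, with the scratch path shortened from test/bdev/nbdrandtest:

nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
tmp=/tmp/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct  # write it through each device
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"                             # read back and byte-compare
done
rm "$tmp"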
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:24:55.102 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:24:55.102 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:24:55.102 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:24:55.102 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:55.102 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:55.102 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:24:55.102 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:55.102 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:55.102 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:55.102 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:24:55.360 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:24:55.360 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:24:55.360 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:24:55.360 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:55.360 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:55.360 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:24:55.360 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:55.360 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:55.360 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:55.360 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:24:55.618 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:24:55.618 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:24:55.618 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:24:55.618 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:55.618 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:55.618 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:24:55.618 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:55.618 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:55.618 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:55.618 23:06:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:24:55.876 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:24:55.876 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:24:55.876 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:24:55.876 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:55.876 23:06:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:55.876 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:24:55.876 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:55.876 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:55.876 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:55.876 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:55.876 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:24:56.134 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:24:56.393 malloc_lvol_verify 00:24:56.393 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:24:56.652 bc6ee373-f2f0-4fb1-86c3-32fe6c1f524d 00:24:56.652 23:06:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:24:56.913 50f672ba-2f31-411c-8798-56b59c2bfcd9 00:24:56.913 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:24:57.171 /dev/nbd0 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
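The final bdev_nbd check, whose mke2fs output follows, runs a logical volume through the same NBD path: a malloc bdev hosts an lvstore, a 4 MiB lvol carved from it is exported as /dev/nbd0, the kernel-visible capacity is confirmed, and mkfs.ext4 proves the device is writable end to end. The RPC sequence, taken from the command lines in this trace (rpc shortened from scripts/rpc.py; the capacity check is condensed, the trace reads 8192 sectors here, which at 512 B each is exactly the 4 MiB lvol):

rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB backing bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as an NBD device
(( $(< /sys/block/nbd0/size) != 0 ))                   # capacity visible to the kernel
mkfs.ext4 /dev/nbd0
$rpc nbd_stop_disk /dev/nbd0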
00:24:57.171 mke2fs 1.47.0 (5-Feb-2023) 00:24:57.171 Discarding device blocks: 0/4096 done 00:24:57.171 Creating filesystem with 4096 1k blocks and 1024 inodes 00:24:57.171 00:24:57.171 Allocating group tables: 0/1 done 00:24:57.171 Writing inode tables: 0/1 done 00:24:57.171 Creating journal (1024 blocks): done 00:24:57.171 Writing superblocks and filesystem accounting information: 0/1 done 00:24:57.171 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:57.171 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74404 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74404 ']' 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74404 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74404 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74404' 00:24:57.430 killing process with pid 74404 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74404 00:24:57.430 23:06:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74404 00:24:58.813 23:06:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:24:58.813 00:24:58.813 real 0m11.884s 00:24:58.813 user 0m15.370s 00:24:58.813 sys 0m5.045s 00:24:58.813 23:06:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.813 23:06:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:58.813 ************************************ 
00:24:58.813 END TEST bdev_nbd 00:24:58.813 ************************************ 00:24:58.813 23:06:25 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:24:58.813 23:06:25 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:24:58.813 23:06:25 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:24:58.813 23:06:25 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:24:58.813 23:06:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:58.813 23:06:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.813 23:06:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:58.813 ************************************ 00:24:58.813 START TEST bdev_fio 00:24:58.813 ************************************ 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:24:58.813 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:24:58.813 23:06:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:24:58.813 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:24:58.813 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:24:58.813 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:24:58.813 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:24:58.813 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:24:58.814 ************************************ 00:24:58.814 START TEST bdev_fio_rw_verify 00:24:58.814 ************************************ 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
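The echoes just traced assemble the fio job file for the verify run: one [job_*] section per xNVMe bdev, plus serialize_overlap=1, added after confirming fio 3.x. An approximate shape of the resulting bdev.fio; only the serialize_overlap line and the job sections are visible here, while the [global] verify workload comes from fio_config_gen's template and is inferred from the randwrite banner fio prints further down:

[global]
serialize_overlap=1
; rw=randwrite plus verify settings per fio_config_gen's verify template (assumed)
[job_nvme0n1]
filename=nvme0n1
[job_nvme0n2]
filename=nvme0n2
[job_nvme0n3]
filename=nvme0n3
[job_nvme1n1]
filename=nvme1n1
[job_nvme2n1]
filename=nvme2n1
[job_nvme3n1]
filename=nvme3n1

The engine, queue depth and block size arrive on the command line instead (--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10, per the fio_params local in this trace), which lets the same job file be reused across engines.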
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:58.814 23:06:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:24:59.181 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:59.181 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:59.181 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:59.181 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:59.181 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:59.181 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:24:59.181 fio-3.35 00:24:59.181 Starting 6 threads 00:25:11.505 00:25:11.505 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74819: Mon Dec 9 23:06:37 2024 00:25:11.505 read: IOPS=32.3k, BW=126MiB/s (132MB/s)(1262MiB/10001msec) 00:25:11.505 slat (usec): min=2, max=678, avg= 6.98, stdev= 5.09 00:25:11.505 clat (usec): min=114, max=4602, avg=566.77, 
stdev=225.66 00:25:11.505 lat (usec): min=117, max=4609, avg=573.75, stdev=226.68 00:25:11.505 clat percentiles (usec): 00:25:11.505 | 50.000th=[ 570], 99.000th=[ 1156], 99.900th=[ 1827], 99.990th=[ 4293], 00:25:11.505 | 99.999th=[ 4555] 00:25:11.505 write: IOPS=32.8k, BW=128MiB/s (134MB/s)(1280MiB/10001msec); 0 zone resets 00:25:11.505 slat (usec): min=11, max=2384, avg=25.16, stdev=33.71 00:25:11.505 clat (usec): min=87, max=8231, avg=659.94, stdev=264.68 00:25:11.505 lat (usec): min=105, max=8389, avg=685.10, stdev=269.89 00:25:11.505 clat percentiles (usec): 00:25:11.505 | 50.000th=[ 652], 99.000th=[ 1467], 99.900th=[ 2245], 99.990th=[ 4424], 00:25:11.505 | 99.999th=[ 8160] 00:25:11.505 bw ( KiB/s): min=103215, max=155793, per=100.00%, avg=131797.68, stdev=2437.14, samples=114 00:25:11.505 iops : min=25803, max=38948, avg=32949.11, stdev=609.28, samples=114 00:25:11.505 lat (usec) : 100=0.01%, 250=5.27%, 500=26.55%, 750=44.71%, 1000=18.31% 00:25:11.505 lat (msec) : 2=5.01%, 4=0.12%, 10=0.02% 00:25:11.505 cpu : usr=58.39%, sys=27.31%, ctx=7764, majf=0, minf=27050 00:25:11.505 IO depths : 1=11.8%, 2=24.2%, 4=50.8%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:11.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:11.505 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:11.505 issued rwts: total=323098,327765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:11.505 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:11.505 00:25:11.505 Run status group 0 (all jobs): 00:25:11.505 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=1262MiB (1323MB), run=10001-10001msec 00:25:11.505 WRITE: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=1280MiB (1343MB), run=10001-10001msec 00:25:11.505 ----------------------------------------------------- 00:25:11.505 Suppressions used: 00:25:11.505 count bytes template 00:25:11.505 6 48 /usr/src/fio/parse.c 00:25:11.505 4449 427104 /usr/src/fio/iolog.c 00:25:11.505 1 8 libtcmalloc_minimal.so 00:25:11.505 1 904 libcrypto.so 00:25:11.505 ----------------------------------------------------- 00:25:11.505 00:25:11.505 00:25:11.505 real 0m12.640s 00:25:11.505 user 0m37.098s 00:25:11.505 sys 0m16.894s 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:25:11.505 ************************************ 00:25:11.505 END TEST bdev_fio_rw_verify 00:25:11.505 ************************************ 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:25:11.505 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:11.506 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:25:11.506 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:25:11.506 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:25:11.506 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:25:11.506 23:06:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:25:11.506 23:06:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "87edb157-b0bb-4312-99b3-2fdc4c2f4e4c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "87edb157-b0bb-4312-99b3-2fdc4c2f4e4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "392b65c4-287e-43b3-a434-0d328c8fc3b1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "392b65c4-287e-43b3-a434-0d328c8fc3b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "81d59512-296c-48e6-884f-f70fb7aef400"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "81d59512-296c-48e6-884f-f70fb7aef400",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d5f264b7-bce8-4489-8aa3-cdf70a3ed340"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d5f264b7-bce8-4489-8aa3-cdf70a3ed340",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "22dce233-940c-4f23-8614-f6eb290ddcb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "22dce233-940c-4f23-8614-f6eb290ddcb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "aeb31623-b1e1-4a5d-a345-11ac312c04c0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "aeb31623-b1e1-4a5d-a345-11ac312c04c0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:25:11.779 23:06:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:25:11.779 23:06:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:11.779 23:06:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:25:11.779 /home/vagrant/spdk_repo/spdk 00:25:11.779 23:06:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:25:11.779 23:06:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:25:11.779 00:25:11.779 real 0m12.891s 00:25:11.779 user 0m37.226s 00:25:11.779 sys 0m17.021s 00:25:11.779 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.779 23:06:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:25:11.779 ************************************ 00:25:11.779 END TEST bdev_fio 00:25:11.779 ************************************ 00:25:11.779 23:06:38 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:11.779 23:06:38 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:11.779 23:06:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:25:11.779 23:06:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.779 23:06:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:11.779 ************************************ 00:25:11.779 START TEST bdev_verify 00:25:11.779 ************************************ 00:25:11.779 23:06:38 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:11.779 [2024-12-09 23:06:39.063662] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:25:11.779 [2024-12-09 23:06:39.063821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74997 ] 00:25:12.041 [2024-12-09 23:06:39.253519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:12.300 [2024-12-09 23:06:39.390059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.300 [2024-12-09 23:06:39.390108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.892 Running I/O for 5 seconds... 
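Backing up one step: the LD_PRELOAD line in the bdev_fio_rw_verify trace is the tail end of a small detection dance, because an ASan-instrumented fio plugin only works if the sanitizer runtime is loaded before everything else. The pattern, condensed from the trace (the sanitizer library names, the awk field, and the resulting /usr/lib64/libasan.so.8 are all as logged):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # column 3 of ldd output is the resolved path, e.g. /usr/lib64/libasan.so.8
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # sanitizer runtime first so its interceptors win, then the SPDK plugin
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
        --verify_state_save=0 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output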
00:25:15.225 23040.00 IOPS, 90.00 MiB/s [2024-12-09T23:06:43.128Z] 22495.50 IOPS, 87.87 MiB/s [2024-12-09T23:06:44.509Z] 22432.00 IOPS, 87.62 MiB/s [2024-12-09T23:06:45.079Z] 22424.00 IOPS, 87.59 MiB/s [2024-12-09T23:06:45.079Z] 22515.20 IOPS, 87.95 MiB/s 00:25:17.743 Latency(us) 00:25:17.743 [2024-12-09T23:06:45.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.743 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0x0 length 0x80000 00:25:17.743 nvme0n1 : 5.05 1647.58 6.44 0.00 0.00 77579.26 8738.13 78748.48 00:25:17.743 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0x80000 length 0x80000 00:25:17.743 nvme0n1 : 5.06 1772.12 6.92 0.00 0.00 71665.65 11422.74 61903.88 00:25:17.743 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0x0 length 0x80000 00:25:17.743 nvme0n2 : 5.05 1646.66 6.43 0.00 0.00 77514.02 15370.69 72010.64 00:25:17.743 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0x80000 length 0x80000 00:25:17.743 nvme0n2 : 5.06 1771.07 6.92 0.00 0.00 71609.62 9790.92 63588.34 00:25:17.743 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0x0 length 0x80000 00:25:17.743 nvme0n3 : 5.05 1646.26 6.43 0.00 0.00 77455.53 12791.36 72431.76 00:25:17.743 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0x80000 length 0x80000 00:25:17.743 nvme0n3 : 5.07 1793.62 7.01 0.00 0.00 70600.37 4316.43 64009.46 00:25:17.743 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0x0 length 0x20000 00:25:17.743 nvme1n1 : 5.03 1629.46 6.37 0.00 0.00 78104.05 7843.26 77064.02 00:25:17.743 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0x20000 length 0x20000 00:25:17.743 nvme1n1 : 5.02 1758.98 6.87 0.00 0.00 72658.97 14739.02 68220.61 00:25:17.743 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0x0 length 0xbd0bd 00:25:17.743 nvme2n1 : 5.06 2587.81 10.11 0.00 0.00 48992.76 2131.89 64430.57 00:25:17.743 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:25:17.743 nvme2n1 : 5.04 2773.59 10.83 0.00 0.00 45926.63 5527.13 55166.05 00:25:17.743 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0x0 length 0xa0000 00:25:17.743 nvme3n1 : 5.07 1641.44 6.41 0.00 0.00 77249.58 2724.09 91381.92 00:25:17.743 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:17.743 Verification LBA range: start 0xa0000 length 0xa0000 00:25:17.743 nvme3n1 : 5.05 1747.34 6.83 0.00 0.00 72842.26 8790.77 74116.22 00:25:17.743 [2024-12-09T23:06:45.079Z] =================================================================================================================== 00:25:17.743 [2024-12-09T23:06:45.079Z] Total : 22415.94 87.56 0.00 0.00 68110.83 2131.89 91381.92 00:25:19.120 00:25:19.120 real 0m7.377s 00:25:19.120 user 0m10.896s 00:25:19.120 sys 0m2.378s 00:25:19.120 23:06:46 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.120 23:06:46 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:25:19.120 ************************************ 00:25:19.120 END TEST bdev_verify 00:25:19.120 ************************************ 00:25:19.120 23:06:46 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:19.120 23:06:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:25:19.120 23:06:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.120 23:06:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:19.120 ************************************ 00:25:19.120 START TEST bdev_verify_big_io 00:25:19.120 ************************************ 00:25:19.120 23:06:46 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:19.379 [2024-12-09 23:06:46.508467] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:25:19.379 [2024-12-09 23:06:46.508616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75103 ] 00:25:19.379 [2024-12-09 23:06:46.685516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:19.637 [2024-12-09 23:06:46.820940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.637 [2024-12-09 23:06:46.820971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.204 Running I/O for 5 seconds... 
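Both verify tests drive the same bdevperf binary against the same JSON bdev config; the only material difference is the IO size (-o 4096 above, -o 65536 here), which is why this big-IO pass posts roughly a tenth of the IOPS while still moving more bytes per second. The invocation with the knobs unpacked; the -C gloss is an assumption worth checking against the binary's usage text:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3
    #  --json   bdev layer config (the six xNVMe bdevs dumped earlier)
    #  -q 128   per-job queue depth
    #  -o       IO size in bytes (4096 in the plain bdev_verify run)
    #  -w verify  write a pattern, read it back, compare
    #  -t 5     run time in seconds
    #  -m 0x3   core mask: cores 0 and 1, matching the two reactors logged
    #  -C       (assumption) allow every core in the mask to drive each bdev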
00:25:26.026 3040.00 IOPS, 190.00 MiB/s [2024-12-09T23:06:53.362Z] 3693.00 IOPS, 230.81 MiB/s [2024-12-09T23:06:53.362Z] 3726.67 IOPS, 232.92 MiB/s 00:25:26.026 Latency(us) 00:25:26.026 [2024-12-09T23:06:53.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.026 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.026 Verification LBA range: start 0x0 length 0x8000 00:25:26.026 nvme0n1 : 5.43 220.85 13.80 0.00 0.00 555258.72 18844.89 731055.40 00:25:26.026 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:26.026 Verification LBA range: start 0x8000 length 0x8000 00:25:26.026 nvme0n1 : 5.55 115.35 7.21 0.00 0.00 1076049.47 61482.77 1953972.95 00:25:26.026 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.026 Verification LBA range: start 0x0 length 0x8000 00:25:26.026 nvme0n2 : 5.60 188.60 11.79 0.00 0.00 646051.61 94329.73 896132.42 00:25:26.027 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:26.027 Verification LBA range: start 0x8000 length 0x8000 00:25:26.027 nvme0n2 : 5.68 114.05 7.13 0.00 0.00 1046048.01 145705.74 970248.64 00:25:26.027 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.027 Verification LBA range: start 0x0 length 0x8000 00:25:26.027 nvme0n3 : 5.61 193.98 12.12 0.00 0.00 617005.26 67378.38 956772.96 00:25:26.027 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:26.027 Verification LBA range: start 0x8000 length 0x8000 00:25:26.027 nvme0n3 : 5.55 121.04 7.56 0.00 0.00 987125.82 211399.66 1131956.74 00:25:26.027 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.027 Verification LBA range: start 0x0 length 0x2000 00:25:26.027 nvme1n1 : 5.60 202.78 12.67 0.00 0.00 569329.13 59377.20 1017413.50 00:25:26.027 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:26.027 Verification LBA range: start 0x2000 length 0x2000 00:25:26.027 nvme1n1 : 5.71 148.61 9.29 0.00 0.00 799302.94 9264.53 1017413.50 00:25:26.027 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.027 Verification LBA range: start 0x0 length 0xbd0b 00:25:26.027 nvme2n1 : 5.62 239.61 14.98 0.00 0.00 478012.52 4553.30 781589.18 00:25:26.027 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:26.027 Verification LBA range: start 0xbd0b length 0xbd0b 00:25:26.027 nvme2n1 : 5.70 154.42 9.65 0.00 0.00 752523.67 10948.99 1172383.77 00:25:26.027 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:26.027 Verification LBA range: start 0x0 length 0xa000 00:25:26.027 nvme3n1 : 5.67 233.13 14.57 0.00 0.00 480754.69 1033.05 1266713.50 00:25:26.027 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:26.027 Verification LBA range: start 0xa000 length 0xa000 00:25:26.027 nvme3n1 : 5.71 152.75 9.55 0.00 0.00 740034.83 6264.08 1259975.66 00:25:26.027 [2024-12-09T23:06:53.363Z] =================================================================================================================== 00:25:26.027 [2024-12-09T23:06:53.363Z] Total : 2085.17 130.32 0.00 0.00 679942.14 1033.05 1953972.95 00:25:27.929 00:25:27.929 real 0m8.417s 00:25:27.929 user 0m15.069s 00:25:27.929 sys 0m0.679s 00:25:27.929 ************************************ 00:25:27.929 END TEST bdev_verify_big_io 00:25:27.929 ************************************ 
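A quick cross-check of that Total row: bandwidth is just IOPS times IO size, and the numbers agree:

    # 2085.17 IOPS x 65536 B per IO / 1048576 B per MiB
    awk 'BEGIN { printf "%.2f MiB/s\n", 2085.17 * 65536 / 1048576 }'
    # prints 130.32 MiB/s, matching the 130.32 MiB/s in the Total row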
00:25:27.929 23:06:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.929 23:06:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:25:27.929 23:06:54 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:27.929 23:06:54 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:27.929 23:06:54 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:27.929 23:06:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:27.929 ************************************ 00:25:27.929 START TEST bdev_write_zeroes 00:25:27.929 ************************************ 00:25:27.929 23:06:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:27.929 [2024-12-09 23:06:55.001901] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:25:27.929 [2024-12-09 23:06:55.002142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75217 ] 00:25:27.929 [2024-12-09 23:06:55.186349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.188 [2024-12-09 23:06:55.321266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.758 Running I/O for 1 seconds... 
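One structural note, since the pattern repeats for every test in this log: run_test (from common/autotest_common.sh) is what produces the argument-count check, the starred START/END banners, and the real/user/sys block around each test body. A stripped-down sketch of that shape, not the actual helper, which also manages xtrace state and extra exit-code bookkeeping:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"          # source of the real/user/sys lines in this log
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }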
00:25:29.700 54944.00 IOPS, 214.62 MiB/s 00:25:29.700 Latency(us) 00:25:29.700 [2024-12-09T23:06:57.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.700 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.700 nvme0n1 : 1.03 8857.48 34.60 0.00 0.00 14437.51 8422.30 25161.61 00:25:29.700 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.700 nvme0n2 : 1.03 8848.34 34.56 0.00 0.00 14441.78 8738.13 25372.17 00:25:29.700 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.700 nvme0n3 : 1.03 8839.76 34.53 0.00 0.00 14444.70 8738.13 25793.29 00:25:29.700 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.700 nvme1n1 : 1.03 8831.27 34.50 0.00 0.00 14448.72 8790.77 26214.40 00:25:29.700 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.700 nvme2n1 : 1.02 10711.00 41.84 0.00 0.00 11902.07 4448.03 25898.56 00:25:29.700 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:29.700 nvme3n1 : 1.03 8822.77 34.46 0.00 0.00 14380.61 4342.75 26530.24 00:25:29.700 [2024-12-09T23:06:57.036Z] =================================================================================================================== 00:25:29.700 [2024-12-09T23:06:57.036Z] Total : 54910.62 214.49 0.00 0.00 13938.71 4342.75 26530.24 00:25:31.089 00:25:31.090 real 0m3.172s 00:25:31.090 user 0m2.312s 00:25:31.090 sys 0m0.659s 00:25:31.090 23:06:58 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.090 23:06:58 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:25:31.090 ************************************ 00:25:31.090 END TEST bdev_write_zeroes 00:25:31.090 ************************************ 00:25:31.090 23:06:58 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:31.090 23:06:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:31.090 23:06:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.090 23:06:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:31.090 ************************************ 00:25:31.090 START TEST bdev_json_nonenclosed 00:25:31.090 ************************************ 00:25:31.090 23:06:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:31.090 [2024-12-09 23:06:58.253866] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:25:31.090 [2024-12-09 23:06:58.254019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75276 ] 00:25:31.349 [2024-12-09 23:06:58.441841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.349 [2024-12-09 23:06:58.571983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.349 [2024-12-09 23:06:58.572101] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:25:31.349 [2024-12-09 23:06:58.572125] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:31.349 [2024-12-09 23:06:58.572139] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:31.607 00:25:31.607 real 0m0.697s 00:25:31.607 user 0m0.428s 00:25:31.607 sys 0m0.163s 00:25:31.607 23:06:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.607 23:06:58 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:25:31.607 ************************************ 00:25:31.607 END TEST bdev_json_nonenclosed 00:25:31.607 ************************************ 00:25:31.607 23:06:58 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:31.607 23:06:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:31.607 23:06:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.607 23:06:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:31.607 ************************************ 00:25:31.607 START TEST bdev_json_nonarray 00:25:31.607 ************************************ 00:25:31.607 23:06:58 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:31.867 [2024-12-09 23:06:59.027786] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:25:31.867 [2024-12-09 23:06:59.027937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75302 ] 00:25:32.125 [2024-12-09 23:06:59.215496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.125 [2024-12-09 23:06:59.344245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.125 [2024-12-09 23:06:59.344372] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:25:32.125 [2024-12-09 23:06:59.344397] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:32.125 [2024-12-09 23:06:59.344410] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:32.384 00:25:32.384 real 0m0.688s 00:25:32.384 user 0m0.415s 00:25:32.384 sys 0m0.166s 00:25:32.384 23:06:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.384 23:06:59 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:25:32.384 ************************************ 00:25:32.384 END TEST bdev_json_nonarray 00:25:32.384 ************************************ 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:25:32.384 23:06:59 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:33.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:34.254 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:34.254 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:35.626 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:35.626 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:35.626 00:25:35.626 real 1m0.086s 00:25:35.626 user 1m37.810s 00:25:35.626 sys 0m35.804s 00:25:35.626 23:07:02 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:35.626 23:07:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:35.626 ************************************ 00:25:35.626 END TEST blockdev_xnvme 00:25:35.626 ************************************ 00:25:35.884 23:07:02 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:25:35.884 23:07:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:35.884 23:07:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:35.884 23:07:02 -- common/autotest_common.sh@10 -- # set +x 00:25:35.884 ************************************ 00:25:35.884 START TEST ublk 00:25:35.884 ************************************ 00:25:35.884 23:07:02 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:25:35.884 * Looking for test storage... 
00:25:35.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:25:35.884 23:07:03 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:35.884 23:07:03 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:25:35.884 23:07:03 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:35.884 23:07:03 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:35.884 23:07:03 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.884 23:07:03 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.884 23:07:03 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.884 23:07:03 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.884 23:07:03 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.884 23:07:03 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.884 23:07:03 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.884 23:07:03 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.884 23:07:03 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.884 23:07:03 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.884 23:07:03 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.884 23:07:03 ublk -- scripts/common.sh@344 -- # case "$op" in 00:25:35.884 23:07:03 ublk -- scripts/common.sh@345 -- # : 1 00:25:35.884 23:07:03 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.884 23:07:03 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:35.884 23:07:03 ublk -- scripts/common.sh@365 -- # decimal 1 00:25:35.884 23:07:03 ublk -- scripts/common.sh@353 -- # local d=1 00:25:35.884 23:07:03 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.884 23:07:03 ublk -- scripts/common.sh@355 -- # echo 1 00:25:35.884 23:07:03 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.884 23:07:03 ublk -- scripts/common.sh@366 -- # decimal 2 00:25:35.884 23:07:03 ublk -- scripts/common.sh@353 -- # local d=2 00:25:35.884 23:07:03 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.884 23:07:03 ublk -- scripts/common.sh@355 -- # echo 2 00:25:35.884 23:07:03 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.884 23:07:03 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.885 23:07:03 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.885 23:07:03 ublk -- scripts/common.sh@368 -- # return 0 00:25:35.885 23:07:03 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.885 23:07:03 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:35.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.885 --rc genhtml_branch_coverage=1 00:25:35.885 --rc genhtml_function_coverage=1 00:25:35.885 --rc genhtml_legend=1 00:25:35.885 --rc geninfo_all_blocks=1 00:25:35.885 --rc geninfo_unexecuted_blocks=1 00:25:35.885 00:25:35.885 ' 00:25:35.885 23:07:03 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:35.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.885 --rc genhtml_branch_coverage=1 00:25:35.885 --rc genhtml_function_coverage=1 00:25:35.885 --rc genhtml_legend=1 00:25:35.885 --rc geninfo_all_blocks=1 00:25:35.885 --rc geninfo_unexecuted_blocks=1 00:25:35.885 00:25:35.885 ' 00:25:35.885 23:07:03 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:35.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.885 --rc genhtml_branch_coverage=1 00:25:35.885 --rc 
genhtml_function_coverage=1 00:25:35.885 --rc genhtml_legend=1 00:25:35.885 --rc geninfo_all_blocks=1 00:25:35.885 --rc geninfo_unexecuted_blocks=1 00:25:35.885 00:25:35.885 ' 00:25:35.885 23:07:03 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:35.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.885 --rc genhtml_branch_coverage=1 00:25:35.885 --rc genhtml_function_coverage=1 00:25:35.885 --rc genhtml_legend=1 00:25:35.885 --rc geninfo_all_blocks=1 00:25:35.885 --rc geninfo_unexecuted_blocks=1 00:25:35.885 00:25:35.885 ' 00:25:35.885 23:07:03 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:25:35.885 23:07:03 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:25:35.885 23:07:03 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:25:35.885 23:07:03 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:25:35.885 23:07:03 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:25:35.885 23:07:03 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:25:35.885 23:07:03 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:25:35.885 23:07:03 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:25:35.885 23:07:03 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:25:35.885 23:07:03 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:25:35.885 23:07:03 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:25:35.885 23:07:03 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:25:35.885 23:07:03 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:25:35.885 23:07:03 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:25:35.885 23:07:03 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:25:35.885 23:07:03 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:25:35.885 23:07:03 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:25:35.885 23:07:03 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:25:35.885 23:07:03 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:25:36.144 23:07:03 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:25:36.144 23:07:03 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:36.144 23:07:03 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:36.144 23:07:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:25:36.144 ************************************ 00:25:36.144 START TEST test_save_ublk_config 00:25:36.144 ************************************ 00:25:36.144 23:07:03 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:25:36.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.144 23:07:03 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:25:36.145 23:07:03 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75600 00:25:36.145 23:07:03 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:25:36.145 23:07:03 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75600 00:25:36.145 23:07:03 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75600 ']' 00:25:36.145 23:07:03 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.145 23:07:03 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:36.145 23:07:03 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
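The cmp_versions xtrace above is scripts/common.sh asking whether the installed lcov (1.15 here) predates version 2: it splits both versions on '.', '-' and ':' and compares numeric fields left to right, treating missing fields as zero. A trimmed sketch of that comparison (the real helper also routes each field through a decimal() validator, omitted here):

    lt() {   # returns 0 (true) when $1 < $2, e.g. lt 1.15 2 as traced above
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov predates 2: use the branch/function coverage flags'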
00:25:36.145 23:07:03 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:36.145 23:07:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:36.145 23:07:03 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:25:36.145 [2024-12-09 23:07:03.351363] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:25:36.145 [2024-12-09 23:07:03.351527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75600 ] 00:25:36.403 [2024-12-09 23:07:03.553075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.403 [2024-12-09 23:07:03.710638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.780 23:07:04 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.780 23:07:04 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:25:37.780 23:07:04 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:25:37.780 23:07:04 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:25:37.780 23:07:04 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.780 23:07:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:37.781 [2024-12-09 23:07:04.800485] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:25:37.781 [2024-12-09 23:07:04.801855] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:37.781 malloc0 00:25:37.781 [2024-12-09 23:07:04.903704] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:25:37.781 [2024-12-09 23:07:04.903845] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:25:37.781 [2024-12-09 23:07:04.903864] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:37.781 [2024-12-09 23:07:04.903874] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:37.781 [2024-12-09 23:07:04.911726] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:37.781 [2024-12-09 23:07:04.911790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:37.781 [2024-12-09 23:07:04.918600] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:37.781 [2024-12-09 23:07:04.918845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:37.781 [2024-12-09 23:07:04.935504] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:25:37.781 0 00:25:37.781 23:07:04 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.781 23:07:04 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:25:37.781 23:07:04 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.781 23:07:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:38.040 23:07:05 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.040 23:07:05 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:25:38.040 "subsystems": [ 00:25:38.040 { 00:25:38.040 "subsystem": 
"fsdev", 00:25:38.040 "config": [ 00:25:38.040 { 00:25:38.040 "method": "fsdev_set_opts", 00:25:38.040 "params": { 00:25:38.040 "fsdev_io_pool_size": 65535, 00:25:38.040 "fsdev_io_cache_size": 256 00:25:38.040 } 00:25:38.040 } 00:25:38.040 ] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "keyring", 00:25:38.040 "config": [] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "iobuf", 00:25:38.040 "config": [ 00:25:38.040 { 00:25:38.040 "method": "iobuf_set_options", 00:25:38.040 "params": { 00:25:38.040 "small_pool_count": 8192, 00:25:38.040 "large_pool_count": 1024, 00:25:38.040 "small_bufsize": 8192, 00:25:38.040 "large_bufsize": 135168, 00:25:38.040 "enable_numa": false 00:25:38.040 } 00:25:38.040 } 00:25:38.040 ] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "sock", 00:25:38.040 "config": [ 00:25:38.040 { 00:25:38.040 "method": "sock_set_default_impl", 00:25:38.040 "params": { 00:25:38.040 "impl_name": "posix" 00:25:38.040 } 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "method": "sock_impl_set_options", 00:25:38.040 "params": { 00:25:38.040 "impl_name": "ssl", 00:25:38.040 "recv_buf_size": 4096, 00:25:38.040 "send_buf_size": 4096, 00:25:38.040 "enable_recv_pipe": true, 00:25:38.040 "enable_quickack": false, 00:25:38.040 "enable_placement_id": 0, 00:25:38.040 "enable_zerocopy_send_server": true, 00:25:38.040 "enable_zerocopy_send_client": false, 00:25:38.040 "zerocopy_threshold": 0, 00:25:38.040 "tls_version": 0, 00:25:38.040 "enable_ktls": false 00:25:38.040 } 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "method": "sock_impl_set_options", 00:25:38.040 "params": { 00:25:38.040 "impl_name": "posix", 00:25:38.040 "recv_buf_size": 2097152, 00:25:38.040 "send_buf_size": 2097152, 00:25:38.040 "enable_recv_pipe": true, 00:25:38.040 "enable_quickack": false, 00:25:38.040 "enable_placement_id": 0, 00:25:38.040 "enable_zerocopy_send_server": true, 00:25:38.040 "enable_zerocopy_send_client": false, 00:25:38.040 "zerocopy_threshold": 0, 00:25:38.040 "tls_version": 0, 00:25:38.040 "enable_ktls": false 00:25:38.040 } 00:25:38.040 } 00:25:38.040 ] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "vmd", 00:25:38.040 "config": [] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "accel", 00:25:38.040 "config": [ 00:25:38.040 { 00:25:38.040 "method": "accel_set_options", 00:25:38.040 "params": { 00:25:38.040 "small_cache_size": 128, 00:25:38.040 "large_cache_size": 16, 00:25:38.040 "task_count": 2048, 00:25:38.040 "sequence_count": 2048, 00:25:38.040 "buf_count": 2048 00:25:38.040 } 00:25:38.040 } 00:25:38.040 ] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "bdev", 00:25:38.040 "config": [ 00:25:38.040 { 00:25:38.040 "method": "bdev_set_options", 00:25:38.040 "params": { 00:25:38.040 "bdev_io_pool_size": 65535, 00:25:38.040 "bdev_io_cache_size": 256, 00:25:38.040 "bdev_auto_examine": true, 00:25:38.040 "iobuf_small_cache_size": 128, 00:25:38.040 "iobuf_large_cache_size": 16 00:25:38.040 } 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "method": "bdev_raid_set_options", 00:25:38.040 "params": { 00:25:38.040 "process_window_size_kb": 1024, 00:25:38.040 "process_max_bandwidth_mb_sec": 0 00:25:38.040 } 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "method": "bdev_iscsi_set_options", 00:25:38.040 "params": { 00:25:38.040 "timeout_sec": 30 00:25:38.040 } 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "method": "bdev_nvme_set_options", 00:25:38.040 "params": { 00:25:38.040 "action_on_timeout": "none", 00:25:38.040 "timeout_us": 0, 00:25:38.040 "timeout_admin_us": 0, 
00:25:38.040 "keep_alive_timeout_ms": 10000, 00:25:38.040 "arbitration_burst": 0, 00:25:38.040 "low_priority_weight": 0, 00:25:38.040 "medium_priority_weight": 0, 00:25:38.040 "high_priority_weight": 0, 00:25:38.040 "nvme_adminq_poll_period_us": 10000, 00:25:38.040 "nvme_ioq_poll_period_us": 0, 00:25:38.040 "io_queue_requests": 0, 00:25:38.040 "delay_cmd_submit": true, 00:25:38.040 "transport_retry_count": 4, 00:25:38.040 "bdev_retry_count": 3, 00:25:38.040 "transport_ack_timeout": 0, 00:25:38.040 "ctrlr_loss_timeout_sec": 0, 00:25:38.040 "reconnect_delay_sec": 0, 00:25:38.040 "fast_io_fail_timeout_sec": 0, 00:25:38.040 "disable_auto_failback": false, 00:25:38.040 "generate_uuids": false, 00:25:38.040 "transport_tos": 0, 00:25:38.040 "nvme_error_stat": false, 00:25:38.040 "rdma_srq_size": 0, 00:25:38.040 "io_path_stat": false, 00:25:38.040 "allow_accel_sequence": false, 00:25:38.040 "rdma_max_cq_size": 0, 00:25:38.040 "rdma_cm_event_timeout_ms": 0, 00:25:38.040 "dhchap_digests": [ 00:25:38.040 "sha256", 00:25:38.040 "sha384", 00:25:38.040 "sha512" 00:25:38.040 ], 00:25:38.040 "dhchap_dhgroups": [ 00:25:38.040 "null", 00:25:38.040 "ffdhe2048", 00:25:38.040 "ffdhe3072", 00:25:38.040 "ffdhe4096", 00:25:38.040 "ffdhe6144", 00:25:38.040 "ffdhe8192" 00:25:38.040 ] 00:25:38.040 } 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "method": "bdev_nvme_set_hotplug", 00:25:38.040 "params": { 00:25:38.040 "period_us": 100000, 00:25:38.040 "enable": false 00:25:38.040 } 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "method": "bdev_malloc_create", 00:25:38.040 "params": { 00:25:38.040 "name": "malloc0", 00:25:38.040 "num_blocks": 8192, 00:25:38.040 "block_size": 4096, 00:25:38.040 "physical_block_size": 4096, 00:25:38.040 "uuid": "13bb5e4a-95a7-4c96-976a-0fe0ce35eabf", 00:25:38.040 "optimal_io_boundary": 0, 00:25:38.040 "md_size": 0, 00:25:38.040 "dif_type": 0, 00:25:38.040 "dif_is_head_of_md": false, 00:25:38.040 "dif_pi_format": 0 00:25:38.040 } 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "method": "bdev_wait_for_examine" 00:25:38.040 } 00:25:38.040 ] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "scsi", 00:25:38.040 "config": null 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "scheduler", 00:25:38.040 "config": [ 00:25:38.040 { 00:25:38.040 "method": "framework_set_scheduler", 00:25:38.040 "params": { 00:25:38.040 "name": "static" 00:25:38.040 } 00:25:38.040 } 00:25:38.040 ] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "vhost_scsi", 00:25:38.040 "config": [] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "vhost_blk", 00:25:38.040 "config": [] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "ublk", 00:25:38.040 "config": [ 00:25:38.040 { 00:25:38.040 "method": "ublk_create_target", 00:25:38.040 "params": { 00:25:38.040 "cpumask": "1" 00:25:38.040 } 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "method": "ublk_start_disk", 00:25:38.040 "params": { 00:25:38.040 "bdev_name": "malloc0", 00:25:38.040 "ublk_id": 0, 00:25:38.040 "num_queues": 1, 00:25:38.040 "queue_depth": 128 00:25:38.040 } 00:25:38.040 } 00:25:38.040 ] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "nbd", 00:25:38.040 "config": [] 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "subsystem": "nvmf", 00:25:38.040 "config": [ 00:25:38.040 { 00:25:38.040 "method": "nvmf_set_config", 00:25:38.040 "params": { 00:25:38.040 "discovery_filter": "match_any", 00:25:38.040 "admin_cmd_passthru": { 00:25:38.040 "identify_ctrlr": false 00:25:38.040 }, 00:25:38.040 "dhchap_digests": [ 00:25:38.040 "sha256", 
00:25:38.040 "sha384", 00:25:38.040 "sha512" 00:25:38.040 ], 00:25:38.040 "dhchap_dhgroups": [ 00:25:38.040 "null", 00:25:38.040 "ffdhe2048", 00:25:38.040 "ffdhe3072", 00:25:38.040 "ffdhe4096", 00:25:38.040 "ffdhe6144", 00:25:38.040 "ffdhe8192" 00:25:38.040 ] 00:25:38.040 } 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "method": "nvmf_set_max_subsystems", 00:25:38.040 "params": { 00:25:38.040 "max_subsystems": 1024 00:25:38.040 } 00:25:38.040 }, 00:25:38.040 { 00:25:38.040 "method": "nvmf_set_crdt", 00:25:38.040 "params": { 00:25:38.040 "crdt1": 0, 00:25:38.040 "crdt2": 0, 00:25:38.040 "crdt3": 0 00:25:38.040 } 00:25:38.040 } 00:25:38.040 ] 00:25:38.040 }, 00:25:38.040 { 00:25:38.041 "subsystem": "iscsi", 00:25:38.041 "config": [ 00:25:38.041 { 00:25:38.041 "method": "iscsi_set_options", 00:25:38.041 "params": { 00:25:38.041 "node_base": "iqn.2016-06.io.spdk", 00:25:38.041 "max_sessions": 128, 00:25:38.041 "max_connections_per_session": 2, 00:25:38.041 "max_queue_depth": 64, 00:25:38.041 "default_time2wait": 2, 00:25:38.041 "default_time2retain": 20, 00:25:38.041 "first_burst_length": 8192, 00:25:38.041 "immediate_data": true, 00:25:38.041 "allow_duplicated_isid": false, 00:25:38.041 "error_recovery_level": 0, 00:25:38.041 "nop_timeout": 60, 00:25:38.041 "nop_in_interval": 30, 00:25:38.041 "disable_chap": false, 00:25:38.041 "require_chap": false, 00:25:38.041 "mutual_chap": false, 00:25:38.041 "chap_group": 0, 00:25:38.041 "max_large_datain_per_connection": 64, 00:25:38.041 "max_r2t_per_connection": 4, 00:25:38.041 "pdu_pool_size": 36864, 00:25:38.041 "immediate_data_pool_size": 16384, 00:25:38.041 "data_out_pool_size": 2048 00:25:38.041 } 00:25:38.041 } 00:25:38.041 ] 00:25:38.041 } 00:25:38.041 ] 00:25:38.041 }' 00:25:38.041 23:07:05 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75600 00:25:38.041 23:07:05 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75600 ']' 00:25:38.041 23:07:05 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75600 00:25:38.041 23:07:05 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:25:38.041 23:07:05 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:38.041 23:07:05 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75600 00:25:38.041 23:07:05 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:38.041 killing process with pid 75600 00:25:38.041 23:07:05 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:38.041 23:07:05 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75600' 00:25:38.041 23:07:05 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75600 00:25:38.041 23:07:05 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75600 00:25:39.945 [2024-12-09 23:07:07.135421] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:25:39.945 [2024-12-09 23:07:07.177544] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:39.945 [2024-12-09 23:07:07.177755] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:25:39.945 [2024-12-09 23:07:07.186523] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:39.945 [2024-12-09 23:07:07.186635] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from 
tailq 00:25:39.945 [2024-12-09 23:07:07.186657] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:25:39.945 [2024-12-09 23:07:07.186692] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:39.945 [2024-12-09 23:07:07.186884] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:25:41.872 23:07:09 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75678 00:25:41.872 23:07:09 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75678 00:25:41.872 23:07:09 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75678 ']' 00:25:41.872 23:07:09 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.872 23:07:09 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:25:41.872 "subsystems": [ 00:25:41.872 { 00:25:41.872 "subsystem": "fsdev", 00:25:41.872 "config": [ 00:25:41.872 { 00:25:41.872 "method": "fsdev_set_opts", 00:25:41.872 "params": { 00:25:41.872 "fsdev_io_pool_size": 65535, 00:25:41.872 "fsdev_io_cache_size": 256 00:25:41.872 } 00:25:41.872 } 00:25:41.872 ] 00:25:41.872 }, 00:25:41.872 { 00:25:41.872 "subsystem": "keyring", 00:25:41.872 "config": [] 00:25:41.872 }, 00:25:41.872 { 00:25:41.872 "subsystem": "iobuf", 00:25:41.872 "config": [ 00:25:41.872 { 00:25:41.872 "method": "iobuf_set_options", 00:25:41.872 "params": { 00:25:41.872 "small_pool_count": 8192, 00:25:41.872 "large_pool_count": 1024, 00:25:41.872 "small_bufsize": 8192, 00:25:41.872 "large_bufsize": 135168, 00:25:41.872 "enable_numa": false 00:25:41.872 } 00:25:41.872 } 00:25:41.872 ] 00:25:41.872 }, 00:25:41.872 { 00:25:41.872 "subsystem": "sock", 00:25:41.872 "config": [ 00:25:41.872 { 00:25:41.872 "method": "sock_set_default_impl", 00:25:41.872 "params": { 00:25:41.872 "impl_name": "posix" 00:25:41.872 } 00:25:41.872 }, 00:25:41.872 { 00:25:41.872 "method": "sock_impl_set_options", 00:25:41.872 "params": { 00:25:41.872 "impl_name": "ssl", 00:25:41.872 "recv_buf_size": 4096, 00:25:41.872 "send_buf_size": 4096, 00:25:41.872 "enable_recv_pipe": true, 00:25:41.872 "enable_quickack": false, 00:25:41.872 "enable_placement_id": 0, 00:25:41.872 "enable_zerocopy_send_server": true, 00:25:41.872 "enable_zerocopy_send_client": false, 00:25:41.872 "zerocopy_threshold": 0, 00:25:41.872 "tls_version": 0, 00:25:41.872 "enable_ktls": false 00:25:41.872 } 00:25:41.872 }, 00:25:41.872 { 00:25:41.872 "method": "sock_impl_set_options", 00:25:41.872 "params": { 00:25:41.872 "impl_name": "posix", 00:25:41.872 "recv_buf_size": 2097152, 00:25:41.872 "send_buf_size": 2097152, 00:25:41.872 "enable_recv_pipe": true, 00:25:41.872 "enable_quickack": false, 00:25:41.872 "enable_placement_id": 0, 00:25:41.872 "enable_zerocopy_send_server": true, 00:25:41.872 "enable_zerocopy_send_client": false, 00:25:41.872 "zerocopy_threshold": 0, 00:25:41.872 "tls_version": 0, 00:25:41.872 "enable_ktls": false 00:25:41.872 } 00:25:41.872 } 00:25:41.872 ] 00:25:41.872 }, 00:25:41.872 { 00:25:41.872 "subsystem": "vmd", 00:25:41.872 "config": [] 00:25:41.872 }, 00:25:41.872 { 00:25:41.872 "subsystem": "accel", 00:25:41.872 "config": [ 00:25:41.872 { 00:25:41.872 "method": "accel_set_options", 00:25:41.872 "params": { 00:25:41.872 "small_cache_size": 128, 00:25:41.872 "large_cache_size": 16, 00:25:41.872 "task_count": 2048, 00:25:41.872 "sequence_count": 2048, 00:25:41.872 "buf_count": 2048 00:25:41.872 } 00:25:41.872 } 00:25:41.872 ] 00:25:41.872 }, 00:25:41.872 { 00:25:41.872 "subsystem": "bdev", 00:25:41.872 "config": [ 00:25:41.872 { 00:25:41.872 "method": 
"bdev_set_options", 00:25:41.872 "params": { 00:25:41.872 "bdev_io_pool_size": 65535, 00:25:41.873 "bdev_io_cache_size": 256, 00:25:41.873 "bdev_auto_examine": true, 00:25:41.873 "iobuf_small_cache_size": 128, 00:25:41.873 "iobuf_large_cache_size": 16 00:25:41.873 } 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "method": "bdev_raid_set_options", 00:25:41.873 "params": { 00:25:41.873 "process_window_size_kb": 1024, 00:25:41.873 "process_max_bandwidth_mb_sec": 0 00:25:41.873 } 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "method": "bdev_iscsi_set_options", 00:25:41.873 "params": { 00:25:41.873 "timeout_sec": 30 00:25:41.873 } 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "method": "bdev_nvme_set_options", 00:25:41.873 "params": { 00:25:41.873 "action_on_timeout": "none", 00:25:41.873 "timeout_us": 0, 00:25:41.873 "timeout_admin_us": 0, 00:25:41.873 "keep_alive_timeout_ms": 10000, 00:25:41.873 "arbitration_burst": 0, 00:25:41.873 "low_priority_weight": 0, 00:25:41.873 "medium_priority_weight": 0, 00:25:41.873 "high_priority_weight": 0, 00:25:41.873 "nvme_adminq_poll_period_us": 10000, 00:25:41.873 "nvme_ioq_poll_period_us": 0, 00:25:41.873 "io_queue_requests": 0, 00:25:41.873 "delay_cmd_submit": true, 00:25:41.873 "transport_retry_count": 4, 00:25:41.873 "bdev_retry_count": 3, 00:25:41.873 "transport_ack_timeout": 0, 00:25:41.873 "ctrlr_loss_timeout_sec": 0, 00:25:41.873 "reconnect_delay_sec": 0, 00:25:41.873 "fast_io_fail_timeout_sec": 0, 00:25:41.873 "disable_auto_failback": false, 00:25:41.873 "generate_uuids": false, 00:25:41.873 "transport_tos": 0, 00:25:41.873 "nvme_error_stat": false, 00:25:41.873 "rdma_srq_size": 0, 00:25:41.873 "io_path_stat": false, 00:25:41.873 "allow_accel_sequence": false, 00:25:41.873 "rdma_max_cq_size": 0, 00:25:41.873 "rdma_cm_event_timeout_ms": 0, 00:25:41.873 "dhchap_digests": [ 00:25:41.873 "sha256", 00:25:41.873 "sha384", 00:25:41.873 "sha512" 00:25:41.873 ], 00:25:41.873 "dhchap_dhgroups": [ 00:25:41.873 "null", 00:25:41.873 "ffdhe2048", 00:25:41.873 "ffdhe3072", 00:25:41.873 "ffdhe4096", 00:25:41.873 "ffdhe6144", 00:25:41.873 "ffdhe8192" 00:25:41.873 ] 00:25:41.873 } 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "method": "bdev_nvme_set_hotplug", 00:25:41.873 "params": { 00:25:41.873 "period_us": 100000, 00:25:41.873 "enable": false 00:25:41.873 } 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "method": "bdev_malloc_create", 00:25:41.873 "params": { 00:25:41.873 "name": "malloc0", 00:25:41.873 "num_blocks": 8192, 00:25:41.873 "block_size": 4096, 00:25:41.873 "physical_block_size": 4096, 00:25:41.873 "uuid": "13bb5e4a-95a7-4c96-976a-0fe0ce35eabf", 00:25:41.873 "optimal_io_boundary": 0, 00:25:41.873 "md_size": 0, 00:25:41.873 "dif_type": 0, 00:25:41.873 "dif_is_head_of_md": false, 00:25:41.873 "dif_pi_format": 0 00:25:41.873 } 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "method": "bdev_wait_for_examine" 00:25:41.873 } 00:25:41.873 ] 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "subsystem": "scsi", 00:25:41.873 "config": null 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "subsystem": "scheduler", 00:25:41.873 "config": [ 00:25:41.873 { 00:25:41.873 "method": "framework_set_scheduler", 00:25:41.873 "params": { 00:25:41.873 "name": "static" 00:25:41.873 } 00:25:41.873 } 00:25:41.873 ] 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "subsystem": "vhost_scsi", 00:25:41.873 "config": [] 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "subsystem": "vhost_blk", 00:25:41.873 "config": [] 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "subsystem": "ublk", 00:25:41.873 "config": [ 
00:25:41.873 { 00:25:41.873 "method": "ublk_create_target", 00:25:41.873 "params": { 00:25:41.873 "cpumask": "1" 00:25:41.873 } 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "method": "ublk_start_disk", 00:25:41.873 "params": { 00:25:41.873 "bdev_name": "malloc0", 00:25:41.873 "ublk_id": 0, 00:25:41.873 "num_queues": 1, 00:25:41.873 "queue_depth": 128 00:25:41.873 } 00:25:41.873 } 00:25:41.873 ] 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "subsystem": "nbd", 00:25:41.873 "config": [] 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "subsystem": "nvmf", 00:25:41.873 "config": [ 00:25:41.873 { 00:25:41.873 "method": "nvmf_set_config", 00:25:41.873 "params": { 00:25:41.873 "discovery_filter": "match_any", 00:25:41.873 "admin_cmd_passthru": { 00:25:41.873 "identify_ctrlr": false 00:25:41.873 }, 00:25:41.873 "dhchap_digests": [ 00:25:41.873 "sha256", 00:25:41.873 "sha384", 00:25:41.873 "sha512" 00:25:41.873 ], 00:25:41.873 "dhchap_dhgroups": [ 00:25:41.873 "null", 00:25:41.873 "ffdhe2048", 00:25:41.873 "ffdhe3072", 00:25:41.873 "ffdhe4096", 00:25:41.873 "ffdhe6144", 00:25:41.873 "ffdhe8192" 00:25:41.873 ] 00:25:41.873 } 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "method": "nvmf_set_max_subsystems", 00:25:41.873 "params": { 00:25:41.873 "max_subsystems": 1024 00:25:41.873 } 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "method": "nvmf_set_crdt", 00:25:41.873 "params": { 00:25:41.873 "crdt1": 0, 00:25:41.873 "crdt2": 0, 00:25:41.873 "crdt3": 0 00:25:41.873 } 00:25:41.873 } 00:25:41.873 ] 00:25:41.873 }, 00:25:41.873 { 00:25:41.873 "subsystem": "iscsi", 00:25:41.873 "config": [ 00:25:41.873 { 00:25:41.873 "method": "iscsi_set_options", 00:25:41.873 "params": { 00:25:41.873 "node_base": "iqn.2016-06.io.spdk", 00:25:41.873 "max_sessions": 128, 00:25:41.873 "max_connections_per_session": 2, 00:25:41.873 "max_queue_depth": 64, 00:25:41.873 "default_time2wait": 2, 00:25:41.873 "default_time2retain": 20, 00:25:41.873 "first_burst_length": 8192, 00:25:41.873 "immediate_data": true, 00:25:41.873 "allow_duplicated_isid": false, 00:25:41.873 "error_recovery_level": 0, 00:25:41.873 "nop_timeout": 60, 00:25:41.873 "nop_in_interval": 30, 00:25:41.873 "disable_chap": false, 00:25:41.873 "require_chap": false, 00:25:41.873 "mutual_chap": false, 00:25:41.873 "chap_group": 0, 00:25:41.873 "max_large_datain_per_connection": 64, 00:25:41.873 "max_r2t_per_connection": 4, 00:25:41.873 "pdu_pool_size": 36864, 00:25:41.873 "immediate_data_pool_size": 16384, 00:25:41.873 "data_out_pool_size": 2048 00:25:41.873 } 00:25:41.873 } 00:25:41.873 ] 00:25:41.873 } 00:25:41.873 ] 00:25:41.873 }' 00:25:41.873 23:07:09 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.873 23:07:09 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:25:41.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.873 23:07:09 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.873 23:07:09 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.873 23:07:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:42.133 [2024-12-09 23:07:09.269900] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
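The JSON blob piped into spdk_tgt above via -c /dev/fd/63 is the round trip at the heart of test_save_ublk_config: the configuration captured from the first target (pid 75600) is replayed into a fresh target, which must come back exposing the same ublk device. Done by hand the round trip would look roughly like the sketch below; save_config is the standard rpc.py method for dumping a running target's state, and the file path is illustrative rather than taken from this trace:

  # capture the live configuration of a running target (default socket /var/tmp/spdk.sock)
  scripts/rpc.py save_config > /tmp/ublk_config.json
  # stop the old target, then start a new one that restores the saved state
  build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json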
00:25:42.133 [2024-12-09 23:07:09.270044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75678 ] 00:25:42.133 [2024-12-09 23:07:09.450560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.392 [2024-12-09 23:07:09.595323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.766 [2024-12-09 23:07:10.815502] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:25:43.766 [2024-12-09 23:07:10.816683] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:43.766 [2024-12-09 23:07:10.823730] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:25:43.766 [2024-12-09 23:07:10.823875] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:25:43.766 [2024-12-09 23:07:10.823891] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:43.766 [2024-12-09 23:07:10.823899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:43.766 [2024-12-09 23:07:10.832611] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:43.766 [2024-12-09 23:07:10.832655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:43.766 [2024-12-09 23:07:10.839513] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:43.766 [2024-12-09 23:07:10.839654] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:43.766 [2024-12-09 23:07:10.856496] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:25:43.766 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.766 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:25:43.766 23:07:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:25:43.766 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.766 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:43.766 23:07:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75678 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75678 ']' 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75678 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75678 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75678' 00:25:43.767 killing process with pid 75678 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75678 00:25:43.767 23:07:10 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75678 00:25:46.294 [2024-12-09 23:07:13.145580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:25:46.294 [2024-12-09 23:07:13.181625] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:46.294 [2024-12-09 23:07:13.181851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:25:46.294 [2024-12-09 23:07:13.188558] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:25:46.294 [2024-12-09 23:07:13.188658] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:25:46.294 [2024-12-09 23:07:13.188671] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:25:46.294 [2024-12-09 23:07:13.188704] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:25:46.294 [2024-12-09 23:07:13.188910] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:25:48.198 23:07:15 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:25:48.198 00:25:48.198 real 0m12.004s 00:25:48.198 user 0m8.574s 00:25:48.198 sys 0m4.227s 00:25:48.198 23:07:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.198 ************************************ 00:25:48.198 END TEST test_save_ublk_config 00:25:48.198 ************************************ 00:25:48.198 23:07:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:25:48.198 23:07:15 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75785 00:25:48.198 23:07:15 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:25:48.198 23:07:15 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:48.198 23:07:15 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75785 00:25:48.198 23:07:15 ublk -- common/autotest_common.sh@835 -- # '[' -z 75785 ']' 00:25:48.198 23:07:15 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.198 23:07:15 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:48.198 23:07:15 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.198 23:07:15 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:48.198 23:07:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:25:48.198 [2024-12-09 23:07:15.423612] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
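The fresh target for the main test group is started with -m 0x3, a hexadecimal core mask selecting cores 0 and 1; the two "Reactor started" notices that follow confirm one reactor per masked core. A hedged sketch of the equivalent manual launch (framework_wait_init is an assumption here, not part of this trace; the harness uses its own waitforlisten helper instead):

  # -m takes a hex core mask: 0x3 = bits 0 and 1 = cores 0 and 1
  build/bin/spdk_tgt -m 0x3 -L ublk &
  # block until the app has finished starting up before issuing further RPCs
  scripts/rpc.py -t 120 framework_wait_init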
00:25:48.198 [2024-12-09 23:07:15.423760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75785 ] 00:25:48.457 [2024-12-09 23:07:15.608934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:48.457 [2024-12-09 23:07:15.761123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.457 [2024-12-09 23:07:15.761140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.832 23:07:16 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:49.832 23:07:16 ublk -- common/autotest_common.sh@868 -- # return 0 00:25:49.832 23:07:16 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:25:49.832 23:07:16 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:49.832 23:07:16 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:49.832 23:07:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:25:49.832 ************************************ 00:25:49.832 START TEST test_create_ublk 00:25:49.832 ************************************ 00:25:49.832 23:07:16 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:25:49.832 23:07:16 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:25:49.832 23:07:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.832 23:07:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:49.832 [2024-12-09 23:07:16.824480] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:25:49.832 [2024-12-09 23:07:16.828087] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:25:49.832 23:07:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.832 23:07:16 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:25:49.832 23:07:16 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:25:49.832 23:07:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.832 23:07:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:49.832 23:07:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.832 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:25:49.832 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:25:49.832 23:07:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.832 23:07:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:49.832 [2024-12-09 23:07:17.146781] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:25:49.832 [2024-12-09 23:07:17.147320] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:25:49.832 [2024-12-09 23:07:17.147348] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:25:49.832 [2024-12-09 23:07:17.147360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:25:49.832 [2024-12-09 23:07:17.154559] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:25:49.832 [2024-12-09 23:07:17.154606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:25:49.832 
[2024-12-09 23:07:17.162543] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:25:49.832 [2024-12-09 23:07:17.163360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:25:50.091 [2024-12-09 23:07:17.185554] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:25:50.091 23:07:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:25:50.091 23:07:17 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.091 23:07:17 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:25:50.091 23:07:17 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:25:50.091 { 00:25:50.091 "ublk_device": "/dev/ublkb0", 00:25:50.091 "id": 0, 00:25:50.091 "queue_depth": 512, 00:25:50.091 "num_queues": 4, 00:25:50.091 "bdev_name": "Malloc0" 00:25:50.091 } 00:25:50.091 ]' 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:25:50.091 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:25:50.351 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:25:50.351 23:07:17 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:25:50.351 23:07:17 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:25:50.351 23:07:17 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:25:50.351 23:07:17 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:25:50.351 23:07:17 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:25:50.351 23:07:17 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:25:50.351 23:07:17 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:25:50.351 23:07:17 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:25:50.351 23:07:17 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:25:50.351 23:07:17 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:25:50.351 23:07:17 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:25:50.351 23:07:17 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:25:50.351 fio: verification read phase will never start because write phase uses all of runtime 00:25:50.351 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:25:50.351 fio-3.35 00:25:50.351 Starting 1 process 00:26:02.602 00:26:02.602 fio_test: (groupid=0, jobs=1): err= 0: pid=75838: Mon Dec 9 23:07:27 2024 00:26:02.602 write: IOPS=12.8k, BW=49.8MiB/s (52.2MB/s)(498MiB/10001msec); 0 zone resets 00:26:02.602 clat (usec): min=43, max=12618, avg=77.39, stdev=161.35 00:26:02.602 lat (usec): min=43, max=12619, avg=77.97, stdev=161.43 00:26:02.602 clat percentiles (usec): 00:26:02.602 | 1.00th=[ 56], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 63], 00:26:02.602 | 30.00th=[ 65], 40.00th=[ 66], 50.00th=[ 68], 60.00th=[ 69], 00:26:02.602 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 81], 95.00th=[ 90], 00:26:02.602 | 99.00th=[ 119], 99.50th=[ 190], 99.90th=[ 3195], 99.95th=[ 3752], 00:26:02.602 | 99.99th=[ 4228] 00:26:02.602 bw ( KiB/s): min=18008, max=55560, per=99.15%, avg=50576.42, stdev=8569.08, samples=19 00:26:02.602 iops : min= 4502, max=13890, avg=12644.11, stdev=2142.27, samples=19 00:26:02.602 lat (usec) : 50=0.03%, 100=97.45%, 250=2.08%, 500=0.06%, 750=0.04% 00:26:02.602 lat (usec) : 1000=0.04% 00:26:02.602 lat (msec) : 2=0.10%, 4=0.17%, 10=0.02%, 20=0.01% 00:26:02.602 cpu : usr=2.89%, sys=8.63%, ctx=127744, majf=0, minf=796 00:26:02.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:02.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:02.602 issued rwts: total=0,127542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:02.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:02.602 00:26:02.602 Run status group 0 (all jobs): 00:26:02.602 WRITE: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=498MiB (522MB), run=10001-10001msec 00:26:02.602 00:26:02.602 Disk stats (read/write): 00:26:02.602 ublkb0: ios=0/125986, merge=0/0, ticks=0/8553, in_queue=8554, util=97.90% 00:26:02.602 23:07:27 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.602 [2024-12-09 23:07:27.738684] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:26:02.602 [2024-12-09 23:07:27.774157] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:02.602 [2024-12-09 23:07:27.775141] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:26:02.602 [2024-12-09 23:07:27.782541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:02.602 [2024-12-09 23:07:27.782876] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:26:02.602 [2024-12-09 23:07:27.782897] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.602 23:07:27 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.602 [2024-12-09 23:07:27.806640] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:26:02.602 request: 00:26:02.602 { 00:26:02.602 "ublk_id": 0, 00:26:02.602 "method": "ublk_stop_disk", 00:26:02.602 "req_id": 1 00:26:02.602 } 00:26:02.602 Got JSON-RPC error response 00:26:02.602 response: 00:26:02.602 { 00:26:02.602 "code": -19, 00:26:02.602 "message": "No such device" 00:26:02.602 } 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:02.602 23:07:27 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.602 [2024-12-09 23:07:27.837656] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:26:02.602 [2024-12-09 23:07:27.845488] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:26:02.602 [2024-12-09 23:07:27.845591] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.602 23:07:27 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.602 23:07:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.602 23:07:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.602 23:07:28 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:26:02.602 23:07:28 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:02.602 23:07:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.602 23:07:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.602 23:07:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.602 23:07:28 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:26:02.602 23:07:28 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:26:02.602 23:07:28 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:26:02.602 23:07:28 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:26:02.602 23:07:28 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.602 23:07:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.602 23:07:28 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.602 23:07:28 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:26:02.602 23:07:28 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:26:02.602 ************************************ 00:26:02.602 END TEST test_create_ublk 00:26:02.602 ************************************ 00:26:02.602 23:07:28 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:26:02.602 00:26:02.603 real 0m11.915s 00:26:02.603 user 0m0.713s 00:26:02.603 sys 0m1.022s 00:26:02.603 23:07:28 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.603 23:07:28 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.603 23:07:28 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:26:02.603 23:07:28 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:02.603 23:07:28 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.603 23:07:28 ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.603 ************************************ 00:26:02.603 START TEST test_create_multi_ublk 00:26:02.603 ************************************ 00:26:02.603 23:07:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:26:02.603 23:07:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:26:02.603 23:07:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.603 23:07:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.603 [2024-12-09 23:07:28.815471] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:26:02.603 [2024-12-09 23:07:28.818395] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:02.603 23:07:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.603 23:07:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:26:02.603 23:07:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:26:02.603 23:07:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:02.603 23:07:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:26:02.603 23:07:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.603 23:07:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.603 [2024-12-09 23:07:29.102728] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:26:02.603 [2024-12-09 23:07:29.103304] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:26:02.603 [2024-12-09 23:07:29.103323] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:26:02.603 [2024-12-09 23:07:29.103339] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:26:02.603 [2024-12-09 23:07:29.111950] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:02.603 [2024-12-09 23:07:29.112010] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:02.603 [2024-12-09 23:07:29.118531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:02.603 [2024-12-09 23:07:29.119352] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:26:02.603 [2024-12-09 23:07:29.129585] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.603 [2024-12-09 23:07:29.437750] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:26:02.603 [2024-12-09 23:07:29.438269] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:26:02.603 [2024-12-09 23:07:29.438292] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:26:02.603 [2024-12-09 23:07:29.438302] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:26:02.603 [2024-12-09 23:07:29.445579] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:02.603 [2024-12-09 23:07:29.445623] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:02.603 [2024-12-09 23:07:29.456531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:02.603 [2024-12-09 23:07:29.457403] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:26:02.603 [2024-12-09 23:07:29.480509] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:02.603 
23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.603 [2024-12-09 23:07:29.793659] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:26:02.603 [2024-12-09 23:07:29.794178] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:26:02.603 [2024-12-09 23:07:29.794197] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:26:02.603 [2024-12-09 23:07:29.794209] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:26:02.603 [2024-12-09 23:07:29.801545] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:02.603 [2024-12-09 23:07:29.801594] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:02.603 [2024-12-09 23:07:29.808513] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:02.603 [2024-12-09 23:07:29.809263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:26:02.603 [2024-12-09 23:07:29.817580] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.603 23:07:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.863 23:07:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.863 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:26:02.863 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:26:02.863 23:07:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.863 23:07:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:02.863 [2024-12-09 23:07:30.141727] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:26:02.863 [2024-12-09 23:07:30.142235] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:26:02.863 [2024-12-09 23:07:30.142257] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:26:02.863 [2024-12-09 23:07:30.142266] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:26:02.863 
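test_create_multi_ublk repeats the same bdev-plus-disk bring-up in a loop over ids 0..3 (the harness's seq 0 $MAX_DEV_ID), one malloc bdev per ublk device. A minimal sketch of the loop body, assuming a target is already running with the ublk target created:

  for i in 0 1 2 3; do
      scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
      scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
  done
  scripts/rpc.py ublk_get_disks   # should list /dev/ublkb0 through /dev/ublkb3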
[2024-12-09 23:07:30.149550] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:02.863 [2024-12-09 23:07:30.149587] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:02.863 [2024-12-09 23:07:30.157582] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:02.863 [2024-12-09 23:07:30.158376] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:26:02.863 [2024-12-09 23:07:30.162319] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:26:02.863 23:07:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.863 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:26:02.863 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:26:02.863 23:07:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.863 23:07:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:26:03.122 { 00:26:03.122 "ublk_device": "/dev/ublkb0", 00:26:03.122 "id": 0, 00:26:03.122 "queue_depth": 512, 00:26:03.122 "num_queues": 4, 00:26:03.122 "bdev_name": "Malloc0" 00:26:03.122 }, 00:26:03.122 { 00:26:03.122 "ublk_device": "/dev/ublkb1", 00:26:03.122 "id": 1, 00:26:03.122 "queue_depth": 512, 00:26:03.122 "num_queues": 4, 00:26:03.122 "bdev_name": "Malloc1" 00:26:03.122 }, 00:26:03.122 { 00:26:03.122 "ublk_device": "/dev/ublkb2", 00:26:03.122 "id": 2, 00:26:03.122 "queue_depth": 512, 00:26:03.122 "num_queues": 4, 00:26:03.122 "bdev_name": "Malloc2" 00:26:03.122 }, 00:26:03.122 { 00:26:03.122 "ublk_device": "/dev/ublkb3", 00:26:03.122 "id": 3, 00:26:03.122 "queue_depth": 512, 00:26:03.122 "num_queues": 4, 00:26:03.122 "bdev_name": "Malloc3" 00:26:03.122 } 00:26:03.122 ]' 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:03.122 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:26:03.381 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:26:03.381 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:26:03.381 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:26:03.381 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:26:03.381 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:03.381 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:26:03.381 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:03.381 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:26:03.381 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:26:03.381 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:03.381 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:26:03.641 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:26:03.641 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:26:03.641 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:26:03.641 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:26:03.641 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:03.641 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:26:03.641 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:03.641 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:26:03.641 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:26:03.642 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:03.642 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:26:03.642 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:26:03.642 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:26:03.900 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:26:03.900 23:07:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:03.900 [2024-12-09 23:07:31.158659] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:26:03.900 [2024-12-09 23:07:31.201549] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:03.900 [2024-12-09 23:07:31.202610] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:26:03.900 [2024-12-09 23:07:31.210552] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:03.900 [2024-12-09 23:07:31.211019] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:26:03.900 [2024-12-09 23:07:31.211037] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.900 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:03.900 [2024-12-09 23:07:31.222675] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:26:04.159 [2024-12-09 23:07:31.262091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:04.159 [2024-12-09 23:07:31.263068] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:26:04.159 [2024-12-09 23:07:31.271529] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:04.159 [2024-12-09 23:07:31.271855] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:26:04.159 [2024-12-09 23:07:31.271871] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:04.159 [2024-12-09 23:07:31.287660] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:26:04.159 [2024-12-09 23:07:31.323111] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:04.159 [2024-12-09 23:07:31.324127] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:26:04.159 [2024-12-09 23:07:31.332609] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:04.159 [2024-12-09 23:07:31.332930] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:26:04.159 [2024-12-09 23:07:31.332946] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:26:04.159 [2024-12-09 23:07:31.350655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:26:04.159 [2024-12-09 23:07:31.388090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:26:04.159 [2024-12-09 23:07:31.388882] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:26:04.159 [2024-12-09 23:07:31.398541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:26:04.159 [2024-12-09 23:07:31.398878] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:26:04.159 [2024-12-09 23:07:31.398901] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.159 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:26:04.418 [2024-12-09 23:07:31.622623] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:26:04.419 [2024-12-09 23:07:31.630485] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:26:04.419 [2024-12-09 23:07:31.630549] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:26:04.419 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:26:04.419 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:04.419 23:07:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:04.419 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.419 23:07:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.354 23:07:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.354 23:07:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:05.354 23:07:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:05.354 23:07:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.354 23:07:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.614 23:07:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.614 23:07:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:05.614 23:07:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:26:05.614 23:07:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.614 23:07:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:05.875 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.875 23:07:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:26:05.875 23:07:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:26:05.875 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.875 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:26:06.448 23:07:33 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:26:06.448 ************************************ 00:26:06.448 END TEST test_create_multi_ublk 00:26:06.448 ************************************ 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:26:06.448 00:26:06.448 real 0m4.786s 00:26:06.448 user 0m1.086s 00:26:06.448 sys 0m0.284s 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:06.448 23:07:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:26:06.448 23:07:33 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:06.448 23:07:33 ublk -- ublk/ublk.sh@147 -- # cleanup 00:26:06.448 23:07:33 ublk -- ublk/ublk.sh@130 -- # killprocess 75785 00:26:06.448 23:07:33 ublk -- common/autotest_common.sh@954 -- # '[' -z 75785 ']' 00:26:06.448 23:07:33 ublk -- common/autotest_common.sh@958 -- # kill -0 75785 00:26:06.448 23:07:33 ublk -- common/autotest_common.sh@959 -- # uname 00:26:06.448 23:07:33 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.448 23:07:33 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75785 00:26:06.448 killing process with pid 75785 00:26:06.448 23:07:33 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:06.448 23:07:33 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:06.448 23:07:33 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75785' 00:26:06.448 23:07:33 ublk -- common/autotest_common.sh@973 -- # kill 75785 00:26:06.448 23:07:33 ublk -- common/autotest_common.sh@978 -- # wait 75785 00:26:07.825 [2024-12-09 23:07:34.950756] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:26:07.825 [2024-12-09 23:07:34.950832] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:26:09.198 00:26:09.198 real 0m33.357s 00:26:09.198 user 0m46.459s 00:26:09.198 sys 0m11.380s 00:26:09.198 ************************************ 00:26:09.198 END TEST ublk 00:26:09.198 ************************************ 00:26:09.198 23:07:36 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.198 23:07:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:26:09.198 23:07:36 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:26:09.198 
23:07:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:09.198 23:07:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.198 23:07:36 -- common/autotest_common.sh@10 -- # set +x 00:26:09.198 ************************************ 00:26:09.198 START TEST ublk_recovery 00:26:09.198 ************************************ 00:26:09.198 23:07:36 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:26:09.458 * Looking for test storage... 00:26:09.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:09.458 23:07:36 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:09.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.458 --rc genhtml_branch_coverage=1 00:26:09.458 --rc genhtml_function_coverage=1 00:26:09.458 --rc genhtml_legend=1 00:26:09.458 --rc geninfo_all_blocks=1 00:26:09.458 --rc geninfo_unexecuted_blocks=1 00:26:09.458 00:26:09.458 ' 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:09.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.458 --rc genhtml_branch_coverage=1 00:26:09.458 --rc genhtml_function_coverage=1 00:26:09.458 --rc genhtml_legend=1 00:26:09.458 --rc geninfo_all_blocks=1 00:26:09.458 --rc geninfo_unexecuted_blocks=1 00:26:09.458 00:26:09.458 ' 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:09.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.458 --rc genhtml_branch_coverage=1 00:26:09.458 --rc genhtml_function_coverage=1 00:26:09.458 --rc genhtml_legend=1 00:26:09.458 --rc geninfo_all_blocks=1 00:26:09.458 --rc geninfo_unexecuted_blocks=1 00:26:09.458 00:26:09.458 ' 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:09.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:09.458 --rc genhtml_branch_coverage=1 00:26:09.458 --rc genhtml_function_coverage=1 00:26:09.458 --rc genhtml_legend=1 00:26:09.458 --rc geninfo_all_blocks=1 00:26:09.458 --rc geninfo_unexecuted_blocks=1 00:26:09.458 00:26:09.458 ' 00:26:09.458 23:07:36 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:26:09.458 23:07:36 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:26:09.458 23:07:36 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:26:09.458 23:07:36 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:26:09.458 23:07:36 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:26:09.458 23:07:36 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:26:09.458 23:07:36 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:26:09.458 23:07:36 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:26:09.458 23:07:36 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:26:09.458 23:07:36 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:26:09.458 23:07:36 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76220 00:26:09.458 23:07:36 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:09.458 23:07:36 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:26:09.458 23:07:36 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76220 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76220 ']' 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:09.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:09.458 23:07:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:09.784 [2024-12-09 23:07:36.796297] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:26:09.784 [2024-12-09 23:07:36.796494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76220 ] 00:26:09.784 [2024-12-09 23:07:37.001166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:10.061 [2024-12-09 23:07:37.146051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.061 [2024-12-09 23:07:37.146069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.006 23:07:38 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.006 23:07:38 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:26:11.006 23:07:38 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:26:11.006 23:07:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.006 23:07:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.006 [2024-12-09 23:07:38.185481] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:26:11.006 [2024-12-09 23:07:38.188755] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:11.006 23:07:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.006 23:07:38 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:26:11.006 23:07:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.006 23:07:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.266 malloc0 00:26:11.266 23:07:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.266 23:07:38 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:26:11.266 23:07:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.266 23:07:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.266 [2024-12-09 23:07:38.350761] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:26:11.266 [2024-12-09 23:07:38.350911] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:26:11.266 [2024-12-09 23:07:38.350926] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:26:11.266 [2024-12-09 23:07:38.350936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:26:11.266 [2024-12-09 23:07:38.361688] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:26:11.266 [2024-12-09 23:07:38.361731] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:26:11.266 [2024-12-09 23:07:38.368526] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:26:11.266 [2024-12-09 23:07:38.368728] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:26:11.266 [2024-12-09 23:07:38.380525] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:26:11.266 1 00:26:11.266 23:07:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.266 23:07:38 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:26:12.203 23:07:39 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:26:12.203 23:07:39 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76266 00:26:12.203 23:07:39 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:26:12.203 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:12.203 fio-3.35 00:26:12.203 Starting 1 process 00:26:17.472 23:07:44 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76220 00:26:17.472 23:07:44 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:26:22.744 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76220 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:26:22.744 23:07:49 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:26:22.744 23:07:49 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76372 00:26:22.744 23:07:49 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:22.744 23:07:49 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76372 00:26:22.744 23:07:49 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76372 ']' 00:26:22.744 23:07:49 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.744 23:07:49 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:22.744 23:07:49 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.744 23:07:49 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:22.744 23:07:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.744 [2024-12-09 23:07:49.519485] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
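The sequence this test exercises, condensed into plain rpc.py calls, is roughly the following (a sketch assuming a spdk_tgt built with ublk support and the ublk_drv module loaded, as above; it mirrors the trace, it is not ublk_recovery.sh verbatim):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # first target: create the device and run I/O against /dev/ublkb1
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096   # 64 MiB ramdisk, 4 KiB blocks
    $rpc ublk_start_disk malloc0 1 -q 2 -d 128   # ublk id 1, 2 queues, queue depth 128
    # ... fio runs against /dev/ublkb1; the target (pid 76220 above) is SIGKILLed mid-run ...
    # second target (pid 76372): re-create the backing bdev, then RECOVER the disk
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096
    $rpc ublk_recover_disk malloc0 1             # drives UBLK_CMD_START/END_USER_RECOVERY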
00:26:22.744 [2024-12-09 23:07:49.519641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76372 ] 00:26:22.744 [2024-12-09 23:07:49.705467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:22.744 [2024-12-09 23:07:49.844670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.744 [2024-12-09 23:07:49.844705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.682 23:07:50 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:23.682 23:07:50 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:26:23.682 23:07:50 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:26:23.682 23:07:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.682 23:07:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.682 [2024-12-09 23:07:50.911514] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:26:23.682 [2024-12-09 23:07:50.915166] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:26:23.682 23:07:50 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.682 23:07:50 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:26:23.682 23:07:50 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.682 23:07:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.941 malloc0 00:26:23.941 23:07:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.941 23:07:51 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:26:23.941 23:07:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.941 23:07:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.941 [2024-12-09 23:07:51.071732] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:26:23.941 [2024-12-09 23:07:51.071793] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:26:23.941 [2024-12-09 23:07:51.071807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:26:23.941 [2024-12-09 23:07:51.078574] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:26:23.941 [2024-12-09 23:07:51.078619] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:26:23.941 1 00:26:23.941 23:07:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.941 23:07:51 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76266 00:26:24.878 [2024-12-09 23:07:52.077515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:26:24.878 [2024-12-09 23:07:52.085582] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:26:24.878 [2024-12-09 23:07:52.085642] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:26:25.818 [2024-12-09 23:07:53.084101] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:26:25.818 [2024-12-09 23:07:53.093533] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:26:25.818 [2024-12-09 23:07:53.093586] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:26:26.777 [2024-12-09 23:07:54.092530] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:26:26.777 [2024-12-09 23:07:54.100536] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:26:26.777 [2024-12-09 23:07:54.100572] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:26:26.777 [2024-12-09 23:07:54.100587] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:26:26.777 [2024-12-09 23:07:54.100733] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:26:48.786 [2024-12-09 23:08:14.555545] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:26:48.786 [2024-12-09 23:08:14.562004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:26:48.786 [2024-12-09 23:08:14.567936] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:26:48.786 [2024-12-09 23:08:14.567981] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:27:15.336 00:27:15.336 fio_test: (groupid=0, jobs=1): err= 0: pid=76269: Mon Dec 9 23:08:39 2024 00:27:15.336 read: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(2588MiB/60002msec) 00:27:15.336 slat (nsec): min=1854, max=1476.0k, avg=8135.53, stdev=4003.49 00:27:15.336 clat (usec): min=1222, max=30187k, avg=5517.15, stdev=284825.97 00:27:15.336 lat (usec): min=1226, max=30187k, avg=5525.29, stdev=284825.98 00:27:15.336 clat percentiles (usec): 00:27:15.336 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2442], 00:27:15.336 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2638], 00:27:15.336 | 70.00th=[ 2737], 80.00th=[ 3163], 90.00th=[ 3425], 95.00th=[ 4178], 00:27:15.336 | 99.00th=[ 6390], 99.50th=[ 6980], 99.90th=[ 8848], 99.95th=[12911], 00:27:15.336 | 99.99th=[13829] 00:27:15.336 bw ( KiB/s): min= 8143, max=98728, per=100.00%, avg=86964.58, stdev=16080.53, samples=60 00:27:15.336 iops : min= 2035, max=24682, avg=21741.10, stdev=4020.20, samples=60 00:27:15.336 write: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(2585MiB/60002msec); 0 zone resets 00:27:15.336 slat (nsec): min=1908, max=678686, avg=8205.14, stdev=3539.88 00:27:15.336 clat (usec): min=1140, max=30187k, avg=6065.09, stdev=308181.34 00:27:15.336 lat (usec): min=1148, max=30187k, avg=6073.29, stdev=308181.35 00:27:15.336 clat percentiles (msec): 00:27:15.336 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:27:15.336 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:27:15.336 | 70.00th=[ 3], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:27:15.336 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 10], 99.95th=[ 13], 00:27:15.336 | 99.99th=[17113] 00:27:15.336 bw ( KiB/s): min= 7729, max=99064, per=100.00%, avg=86893.37, stdev=16091.45, samples=60 00:27:15.336 iops : min= 1932, max=24766, avg=21723.28, stdev=4022.90, samples=60 00:27:15.336 lat (msec) : 2=0.36%, 4=93.70%, 10=5.87%, 20=0.06%, >=2000=0.01% 00:27:15.336 cpu : usr=6.82%, sys=17.95%, ctx=56942, majf=0, minf=13 00:27:15.336 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:27:15.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:15.336 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:15.336 issued rwts: total=662438,661733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:15.336 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:27:15.336 00:27:15.336 Run status group 0 (all jobs): 00:27:15.336 READ: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=2588MiB (2713MB), run=60002-60002msec 00:27:15.336 WRITE: bw=43.1MiB/s (45.2MB/s), 43.1MiB/s-43.1MiB/s (45.2MB/s-45.2MB/s), io=2585MiB (2710MB), run=60002-60002msec 00:27:15.336 00:27:15.336 Disk stats (read/write): 00:27:15.336 ublkb1: ios=659811/659171, merge=0/0, ticks=3584918/3869925, in_queue=7454843, util=99.97% 00:27:15.336 23:08:39 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.336 [2024-12-09 23:08:39.672136] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:27:15.336 [2024-12-09 23:08:39.718619] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:15.336 [2024-12-09 23:08:39.718924] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:27:15.336 [2024-12-09 23:08:39.726526] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:27:15.336 [2024-12-09 23:08:39.726767] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:27:15.336 [2024-12-09 23:08:39.726782] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.336 23:08:39 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.336 [2024-12-09 23:08:39.741665] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:27:15.336 [2024-12-09 23:08:39.750504] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:27:15.336 [2024-12-09 23:08:39.750596] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.336 23:08:39 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:27:15.336 23:08:39 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:27:15.336 23:08:39 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76372 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76372 ']' 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76372 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76372 00:27:15.336 killing process with pid 76372 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76372' 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76372 00:27:15.336 23:08:39 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76372 00:27:15.336 [2024-12-09 23:08:41.523687] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 
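A quick cross-check of the fio summary above, using only numbers from the log: 662438 reads x 4096 B ≈ 2713 MB over 60.002 s ≈ 45.2 MB/s, which matches the reported READ bandwidth of 43.1 MiB/s (45.2 MB/s). The clat maximum of ~30187k usec (about 30 s) likewise matches the gap between the kill -9 at 23:07:44 and UBLK_CMD_END_USER_RECOVERY completing at 23:08:14: I/O in flight at the crash stalled for the whole recovery window and then completed (hence the >=2000 msec latency bucket at 0.01% of I/Os), with err=0 and util=99.97%, which is exactly the behavior the recovery test verifies.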
00:27:15.336 [2024-12-09 23:08:41.523772] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:27:15.905 00:27:15.905 real 1m6.659s 00:27:15.905 user 1m51.897s 00:27:15.905 sys 0m25.334s 00:27:15.905 23:08:43 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.905 ************************************ 00:27:15.905 END TEST ublk_recovery 00:27:15.905 ************************************ 00:27:15.905 23:08:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:15.905 23:08:43 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:27:15.905 23:08:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:27:15.905 23:08:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:27:15.905 23:08:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:15.905 23:08:43 -- common/autotest_common.sh@10 -- # set +x 00:27:15.905 23:08:43 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:27:15.905 23:08:43 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:27:15.905 23:08:43 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:27:15.905 23:08:43 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:15.905 23:08:43 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:27:15.905 23:08:43 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:27:15.905 23:08:43 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:27:15.905 23:08:43 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:15.905 23:08:43 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:27:15.905 23:08:43 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:27:15.905 23:08:43 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:15.905 23:08:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:15.905 23:08:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.905 23:08:43 -- common/autotest_common.sh@10 -- # set +x 00:27:15.905 ************************************ 00:27:15.905 START TEST ftl 00:27:15.905 ************************************ 00:27:15.905 23:08:43 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:16.163 * Looking for test storage... 00:27:16.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:16.163 23:08:43 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:16.163 23:08:43 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:27:16.163 23:08:43 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:16.163 23:08:43 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:16.163 23:08:43 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.163 23:08:43 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.163 23:08:43 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.163 23:08:43 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.163 23:08:43 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.163 23:08:43 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.163 23:08:43 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.163 23:08:43 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.163 23:08:43 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.163 23:08:43 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.163 23:08:43 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.163 23:08:43 ftl -- scripts/common.sh@344 -- # case "$op" in 00:27:16.163 23:08:43 ftl -- scripts/common.sh@345 -- # : 1 00:27:16.163 23:08:43 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.163 23:08:43 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:16.163 23:08:43 ftl -- scripts/common.sh@365 -- # decimal 1 00:27:16.163 23:08:43 ftl -- scripts/common.sh@353 -- # local d=1 00:27:16.163 23:08:43 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.163 23:08:43 ftl -- scripts/common.sh@355 -- # echo 1 00:27:16.163 23:08:43 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.163 23:08:43 ftl -- scripts/common.sh@366 -- # decimal 2 00:27:16.163 23:08:43 ftl -- scripts/common.sh@353 -- # local d=2 00:27:16.163 23:08:43 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.163 23:08:43 ftl -- scripts/common.sh@355 -- # echo 2 00:27:16.163 23:08:43 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.163 23:08:43 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.163 23:08:43 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.163 23:08:43 ftl -- scripts/common.sh@368 -- # return 0 00:27:16.163 23:08:43 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.163 23:08:43 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:16.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.163 --rc genhtml_branch_coverage=1 00:27:16.163 --rc genhtml_function_coverage=1 00:27:16.163 --rc genhtml_legend=1 00:27:16.163 --rc geninfo_all_blocks=1 00:27:16.163 --rc geninfo_unexecuted_blocks=1 00:27:16.163 00:27:16.163 ' 00:27:16.163 23:08:43 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:16.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.163 --rc genhtml_branch_coverage=1 00:27:16.163 --rc genhtml_function_coverage=1 00:27:16.163 --rc genhtml_legend=1 00:27:16.163 --rc geninfo_all_blocks=1 00:27:16.163 --rc geninfo_unexecuted_blocks=1 00:27:16.163 00:27:16.163 ' 00:27:16.163 23:08:43 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:16.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.164 --rc genhtml_branch_coverage=1 00:27:16.164 --rc genhtml_function_coverage=1 00:27:16.164 --rc genhtml_legend=1 00:27:16.164 --rc geninfo_all_blocks=1 00:27:16.164 --rc geninfo_unexecuted_blocks=1 00:27:16.164 00:27:16.164 ' 00:27:16.164 23:08:43 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:16.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.164 --rc genhtml_branch_coverage=1 00:27:16.164 --rc genhtml_function_coverage=1 00:27:16.164 --rc genhtml_legend=1 00:27:16.164 --rc geninfo_all_blocks=1 00:27:16.164 --rc geninfo_unexecuted_blocks=1 00:27:16.164 00:27:16.164 ' 00:27:16.164 23:08:43 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:16.164 23:08:43 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:27:16.164 23:08:43 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:16.164 23:08:43 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:16.164 23:08:43 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:27:16.164 23:08:43 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:16.164 23:08:43 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:16.164 23:08:43 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:16.164 23:08:43 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:16.164 23:08:43 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:16.164 23:08:43 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:16.164 23:08:43 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:16.164 23:08:43 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:16.164 23:08:43 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:16.164 23:08:43 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:16.164 23:08:43 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:16.164 23:08:43 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:16.164 23:08:43 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:16.164 23:08:43 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:16.164 23:08:43 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:16.164 23:08:43 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:16.164 23:08:43 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:16.164 23:08:43 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:16.164 23:08:43 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:16.164 23:08:43 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:16.164 23:08:43 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:16.164 23:08:43 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:16.164 23:08:43 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.164 23:08:43 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.164 23:08:43 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:16.164 23:08:43 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:27:16.164 23:08:43 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:27:16.164 23:08:43 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:27:16.164 23:08:43 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:27:16.164 23:08:43 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:16.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:16.992 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:16.992 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:16.992 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:16.992 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:17.256 23:08:44 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77179 00:27:17.256 23:08:44 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:27:17.256 23:08:44 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77179 00:27:17.256 23:08:44 ftl -- common/autotest_common.sh@835 -- # '[' -z 77179 ']' 00:27:17.256 23:08:44 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.256 23:08:44 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:17.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.256 23:08:44 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.256 23:08:44 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:17.256 23:08:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:17.256 [2024-12-09 23:08:44.496301] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:27:17.256 [2024-12-09 23:08:44.496477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77179 ] 00:27:17.518 [2024-12-09 23:08:44.683067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.518 [2024-12-09 23:08:44.814102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.086 23:08:45 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.086 23:08:45 ftl -- common/autotest_common.sh@868 -- # return 0 00:27:18.086 23:08:45 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:27:18.346 23:08:45 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:27:19.737 23:08:46 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:27:19.737 23:08:46 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:19.995 23:08:47 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:27:19.995 23:08:47 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:27:19.995 23:08:47 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:27:20.252 23:08:47 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:27:20.252 23:08:47 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:27:20.252 23:08:47 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:27:20.252 23:08:47 ftl -- ftl/ftl.sh@50 -- # break 00:27:20.252 23:08:47 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:27:20.252 23:08:47 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:27:20.252 23:08:47 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:27:20.252 23:08:47 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:27:20.510 23:08:47 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:27:20.510 23:08:47 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:27:20.510 23:08:47 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:27:20.510 23:08:47 ftl -- ftl/ftl.sh@63 -- # break 00:27:20.510 23:08:47 ftl -- ftl/ftl.sh@66 -- # killprocess 77179 00:27:20.510 23:08:47 ftl -- common/autotest_common.sh@954 -- # '[' -z 77179 ']' 00:27:20.510 23:08:47 ftl -- common/autotest_common.sh@958 -- # kill -0 77179 00:27:20.510 23:08:47 ftl -- common/autotest_common.sh@959 -- # uname 00:27:20.510 23:08:47 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:20.510 23:08:47 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77179 00:27:20.510 killing process with pid 77179 00:27:20.510 23:08:47 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:20.510 23:08:47 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:20.510 23:08:47 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77179' 00:27:20.511 23:08:47 ftl -- common/autotest_common.sh@973 -- # kill 77179 00:27:20.511 23:08:47 ftl -- common/autotest_common.sh@978 -- # wait 77179 00:27:23.042 23:08:50 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:27:23.043 23:08:50 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:27:23.043 23:08:50 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:23.043 23:08:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.043 23:08:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:23.043 ************************************ 00:27:23.043 START TEST ftl_fio_basic 00:27:23.043 ************************************ 00:27:23.043 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:27:23.043 * Looking for test storage... 00:27:23.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:23.043 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:23.043 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:27:23.043 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.302 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:23.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.303 --rc genhtml_branch_coverage=1 00:27:23.303 --rc genhtml_function_coverage=1 00:27:23.303 --rc genhtml_legend=1 00:27:23.303 --rc geninfo_all_blocks=1 00:27:23.303 --rc geninfo_unexecuted_blocks=1 00:27:23.303 00:27:23.303 ' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:23.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.303 --rc genhtml_branch_coverage=1 00:27:23.303 --rc genhtml_function_coverage=1 00:27:23.303 --rc genhtml_legend=1 00:27:23.303 --rc geninfo_all_blocks=1 00:27:23.303 --rc geninfo_unexecuted_blocks=1 00:27:23.303 00:27:23.303 ' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:23.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.303 --rc genhtml_branch_coverage=1 00:27:23.303 --rc genhtml_function_coverage=1 00:27:23.303 --rc genhtml_legend=1 00:27:23.303 --rc geninfo_all_blocks=1 00:27:23.303 --rc geninfo_unexecuted_blocks=1 00:27:23.303 00:27:23.303 ' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:23.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.303 --rc genhtml_branch_coverage=1 00:27:23.303 --rc genhtml_function_coverage=1 00:27:23.303 --rc genhtml_legend=1 00:27:23.303 --rc geninfo_all_blocks=1 00:27:23.303 --rc geninfo_unexecuted_blocks=1 00:27:23.303 00:27:23.303 ' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:27:23.303 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77328 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77328 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77328 ']' 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.304 23:08:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:23.304 [2024-12-09 23:08:50.578648] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
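The repeated bdev_get_bdevs/jq passes in the trace below are the get_bdev_size helper from autotest_common.sh, which turns a bdev's block_size and num_blocks into a size in MiB. In sketch form (this condenses the trace, assuming rpc.py and jq on PATH; it is not the helper verbatim):

    get_bdev_size() {
        local name=$1 info bs nb
        info=$(rpc.py bdev_get_bdevs -b "$name")
        bs=$(jq '.[] .block_size' <<< "$info")   # 4096 for every bdev in this run
        nb=$(jq '.[] .num_blocks' <<< "$info")
        echo $(( bs * nb / 1024 / 1024 ))        # nvme0n1: 5120 MiB; the thin lvol: 103424 MiB
    }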
00:27:23.304 [2024-12-09 23:08:50.578799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77328 ] 00:27:23.563 [2024-12-09 23:08:50.766793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:23.822 [2024-12-09 23:08:50.910830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.822 [2024-12-09 23:08:50.910930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.822 [2024-12-09 23:08:50.910959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:24.761 23:08:51 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.761 23:08:51 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:27:24.761 23:08:51 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:24.761 23:08:51 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:27:24.761 23:08:51 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:24.761 23:08:51 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:27:24.761 23:08:51 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:27:24.761 23:08:51 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:25.020 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:25.020 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:27:25.020 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:25.020 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:25.020 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:25.020 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:27:25.020 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:27:25.020 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:25.280 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:25.280 { 00:27:25.280 "name": "nvme0n1", 00:27:25.280 "aliases": [ 00:27:25.280 "0c367a80-93a0-4034-a1df-c5e21fae136e" 00:27:25.280 ], 00:27:25.280 "product_name": "NVMe disk", 00:27:25.280 "block_size": 4096, 00:27:25.280 "num_blocks": 1310720, 00:27:25.280 "uuid": "0c367a80-93a0-4034-a1df-c5e21fae136e", 00:27:25.280 "numa_id": -1, 00:27:25.280 "assigned_rate_limits": { 00:27:25.280 "rw_ios_per_sec": 0, 00:27:25.280 "rw_mbytes_per_sec": 0, 00:27:25.280 "r_mbytes_per_sec": 0, 00:27:25.280 "w_mbytes_per_sec": 0 00:27:25.280 }, 00:27:25.280 "claimed": false, 00:27:25.280 "zoned": false, 00:27:25.280 "supported_io_types": { 00:27:25.280 "read": true, 00:27:25.280 "write": true, 00:27:25.280 "unmap": true, 00:27:25.280 "flush": true, 00:27:25.280 "reset": true, 00:27:25.280 "nvme_admin": true, 00:27:25.280 "nvme_io": true, 00:27:25.280 "nvme_io_md": false, 00:27:25.280 "write_zeroes": true, 00:27:25.280 "zcopy": false, 00:27:25.280 "get_zone_info": false, 00:27:25.280 "zone_management": false, 00:27:25.280 "zone_append": false, 00:27:25.280 "compare": true, 00:27:25.280 "compare_and_write": false, 00:27:25.280 "abort": true, 00:27:25.280 
"seek_hole": false, 00:27:25.280 "seek_data": false, 00:27:25.280 "copy": true, 00:27:25.280 "nvme_iov_md": false 00:27:25.280 }, 00:27:25.280 "driver_specific": { 00:27:25.280 "nvme": [ 00:27:25.280 { 00:27:25.280 "pci_address": "0000:00:11.0", 00:27:25.280 "trid": { 00:27:25.280 "trtype": "PCIe", 00:27:25.280 "traddr": "0000:00:11.0" 00:27:25.280 }, 00:27:25.280 "ctrlr_data": { 00:27:25.280 "cntlid": 0, 00:27:25.280 "vendor_id": "0x1b36", 00:27:25.280 "model_number": "QEMU NVMe Ctrl", 00:27:25.280 "serial_number": "12341", 00:27:25.280 "firmware_revision": "8.0.0", 00:27:25.280 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:25.280 "oacs": { 00:27:25.280 "security": 0, 00:27:25.280 "format": 1, 00:27:25.280 "firmware": 0, 00:27:25.280 "ns_manage": 1 00:27:25.280 }, 00:27:25.280 "multi_ctrlr": false, 00:27:25.280 "ana_reporting": false 00:27:25.280 }, 00:27:25.280 "vs": { 00:27:25.280 "nvme_version": "1.4" 00:27:25.280 }, 00:27:25.280 "ns_data": { 00:27:25.280 "id": 1, 00:27:25.280 "can_share": false 00:27:25.280 } 00:27:25.280 } 00:27:25.280 ], 00:27:25.280 "mp_policy": "active_passive" 00:27:25.280 } 00:27:25.280 } 00:27:25.280 ]' 00:27:25.280 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:25.280 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:27:25.280 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:25.280 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:25.280 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:25.280 23:08:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:27:25.281 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:27:25.281 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:25.281 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:27:25.281 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:25.281 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:25.540 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:27:25.540 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:25.800 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=26247e5b-f9a4-4077-b604-73b5a0ccf10d 00:27:25.800 23:08:52 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 26247e5b-f9a4-4077-b604-73b5a0ccf10d 00:27:26.059 23:08:53 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=7bf3c798-9ba4-4c78-81ee-65437a88c338 00:27:26.059 23:08:53 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7bf3c798-9ba4-4c78-81ee-65437a88c338 00:27:26.059 23:08:53 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:27:26.059 23:08:53 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:26.059 23:08:53 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=7bf3c798-9ba4-4c78-81ee-65437a88c338 00:27:26.059 23:08:53 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:27:26.059 23:08:53 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 7bf3c798-9ba4-4c78-81ee-65437a88c338 00:27:26.059 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=7bf3c798-9ba4-4c78-81ee-65437a88c338 
00:27:26.059 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:26.059 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:27:26.059 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:27:26.059 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7bf3c798-9ba4-4c78-81ee-65437a88c338 00:27:26.318 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:26.318 { 00:27:26.318 "name": "7bf3c798-9ba4-4c78-81ee-65437a88c338", 00:27:26.318 "aliases": [ 00:27:26.318 "lvs/nvme0n1p0" 00:27:26.318 ], 00:27:26.318 "product_name": "Logical Volume", 00:27:26.318 "block_size": 4096, 00:27:26.318 "num_blocks": 26476544, 00:27:26.318 "uuid": "7bf3c798-9ba4-4c78-81ee-65437a88c338", 00:27:26.318 "assigned_rate_limits": { 00:27:26.318 "rw_ios_per_sec": 0, 00:27:26.318 "rw_mbytes_per_sec": 0, 00:27:26.318 "r_mbytes_per_sec": 0, 00:27:26.318 "w_mbytes_per_sec": 0 00:27:26.318 }, 00:27:26.318 "claimed": false, 00:27:26.318 "zoned": false, 00:27:26.318 "supported_io_types": { 00:27:26.318 "read": true, 00:27:26.318 "write": true, 00:27:26.318 "unmap": true, 00:27:26.318 "flush": false, 00:27:26.318 "reset": true, 00:27:26.318 "nvme_admin": false, 00:27:26.318 "nvme_io": false, 00:27:26.318 "nvme_io_md": false, 00:27:26.318 "write_zeroes": true, 00:27:26.318 "zcopy": false, 00:27:26.318 "get_zone_info": false, 00:27:26.318 "zone_management": false, 00:27:26.318 "zone_append": false, 00:27:26.318 "compare": false, 00:27:26.318 "compare_and_write": false, 00:27:26.318 "abort": false, 00:27:26.318 "seek_hole": true, 00:27:26.318 "seek_data": true, 00:27:26.318 "copy": false, 00:27:26.318 "nvme_iov_md": false 00:27:26.318 }, 00:27:26.318 "driver_specific": { 00:27:26.318 "lvol": { 00:27:26.318 "lvol_store_uuid": "26247e5b-f9a4-4077-b604-73b5a0ccf10d", 00:27:26.318 "base_bdev": "nvme0n1", 00:27:26.318 "thin_provision": true, 00:27:26.318 "num_allocated_clusters": 0, 00:27:26.318 "snapshot": false, 00:27:26.318 "clone": false, 00:27:26.318 "esnap_clone": false 00:27:26.318 } 00:27:26.318 } 00:27:26.318 } 00:27:26.318 ]' 00:27:26.318 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:26.318 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:27:26.318 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:26.318 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:26.318 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:26.318 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:27:26.318 23:08:53 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:27:26.318 23:08:53 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:27:26.318 23:08:53 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:26.577 23:08:53 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:26.577 23:08:53 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:26.577 23:08:53 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 7bf3c798-9ba4-4c78-81ee-65437a88c338 00:27:26.577 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=7bf3c798-9ba4-4c78-81ee-65437a88c338 00:27:26.577 23:08:53 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:26.577 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:27:26.577 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:27:26.577 23:08:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7bf3c798-9ba4-4c78-81ee-65437a88c338 00:27:26.836 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:26.836 { 00:27:26.836 "name": "7bf3c798-9ba4-4c78-81ee-65437a88c338", 00:27:26.836 "aliases": [ 00:27:26.836 "lvs/nvme0n1p0" 00:27:26.836 ], 00:27:26.836 "product_name": "Logical Volume", 00:27:26.836 "block_size": 4096, 00:27:26.836 "num_blocks": 26476544, 00:27:26.836 "uuid": "7bf3c798-9ba4-4c78-81ee-65437a88c338", 00:27:26.836 "assigned_rate_limits": { 00:27:26.836 "rw_ios_per_sec": 0, 00:27:26.836 "rw_mbytes_per_sec": 0, 00:27:26.836 "r_mbytes_per_sec": 0, 00:27:26.836 "w_mbytes_per_sec": 0 00:27:26.836 }, 00:27:26.836 "claimed": false, 00:27:26.836 "zoned": false, 00:27:26.836 "supported_io_types": { 00:27:26.836 "read": true, 00:27:26.836 "write": true, 00:27:26.836 "unmap": true, 00:27:26.836 "flush": false, 00:27:26.836 "reset": true, 00:27:26.836 "nvme_admin": false, 00:27:26.836 "nvme_io": false, 00:27:26.836 "nvme_io_md": false, 00:27:26.836 "write_zeroes": true, 00:27:26.836 "zcopy": false, 00:27:26.836 "get_zone_info": false, 00:27:26.836 "zone_management": false, 00:27:26.836 "zone_append": false, 00:27:26.836 "compare": false, 00:27:26.836 "compare_and_write": false, 00:27:26.836 "abort": false, 00:27:26.836 "seek_hole": true, 00:27:26.836 "seek_data": true, 00:27:26.836 "copy": false, 00:27:26.836 "nvme_iov_md": false 00:27:26.836 }, 00:27:26.836 "driver_specific": { 00:27:26.836 "lvol": { 00:27:26.836 "lvol_store_uuid": "26247e5b-f9a4-4077-b604-73b5a0ccf10d", 00:27:26.836 "base_bdev": "nvme0n1", 00:27:26.836 "thin_provision": true, 00:27:26.836 "num_allocated_clusters": 0, 00:27:26.836 "snapshot": false, 00:27:26.836 "clone": false, 00:27:26.836 "esnap_clone": false 00:27:26.836 } 00:27:26.836 } 00:27:26.836 } 00:27:26.836 ]' 00:27:26.836 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:26.836 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:27:26.836 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:26.836 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:26.836 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:26.836 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:27:26.836 23:08:54 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:27:26.836 23:08:54 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:27.095 23:08:54 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:27:27.095 23:08:54 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:27:27.095 23:08:54 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:27:27.095 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:27:27.095 23:08:54 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 7bf3c798-9ba4-4c78-81ee-65437a88c338 00:27:27.095 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=7bf3c798-9ba4-4c78-81ee-65437a88c338 00:27:27.095 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:27.095 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:27:27.095 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:27:27.095 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7bf3c798-9ba4-4c78-81ee-65437a88c338 00:27:27.358 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:27.358 { 00:27:27.358 "name": "7bf3c798-9ba4-4c78-81ee-65437a88c338", 00:27:27.358 "aliases": [ 00:27:27.358 "lvs/nvme0n1p0" 00:27:27.358 ], 00:27:27.358 "product_name": "Logical Volume", 00:27:27.358 "block_size": 4096, 00:27:27.358 "num_blocks": 26476544, 00:27:27.358 "uuid": "7bf3c798-9ba4-4c78-81ee-65437a88c338", 00:27:27.358 "assigned_rate_limits": { 00:27:27.358 "rw_ios_per_sec": 0, 00:27:27.358 "rw_mbytes_per_sec": 0, 00:27:27.358 "r_mbytes_per_sec": 0, 00:27:27.358 "w_mbytes_per_sec": 0 00:27:27.358 }, 00:27:27.358 "claimed": false, 00:27:27.358 "zoned": false, 00:27:27.358 "supported_io_types": { 00:27:27.358 "read": true, 00:27:27.358 "write": true, 00:27:27.358 "unmap": true, 00:27:27.358 "flush": false, 00:27:27.358 "reset": true, 00:27:27.358 "nvme_admin": false, 00:27:27.358 "nvme_io": false, 00:27:27.358 "nvme_io_md": false, 00:27:27.358 "write_zeroes": true, 00:27:27.358 "zcopy": false, 00:27:27.358 "get_zone_info": false, 00:27:27.358 "zone_management": false, 00:27:27.358 "zone_append": false, 00:27:27.358 "compare": false, 00:27:27.358 "compare_and_write": false, 00:27:27.358 "abort": false, 00:27:27.358 "seek_hole": true, 00:27:27.358 "seek_data": true, 00:27:27.358 "copy": false, 00:27:27.358 "nvme_iov_md": false 00:27:27.358 }, 00:27:27.358 "driver_specific": { 00:27:27.358 "lvol": { 00:27:27.358 "lvol_store_uuid": "26247e5b-f9a4-4077-b604-73b5a0ccf10d", 00:27:27.358 "base_bdev": "nvme0n1", 00:27:27.358 "thin_provision": true, 00:27:27.358 "num_allocated_clusters": 0, 00:27:27.358 "snapshot": false, 00:27:27.358 "clone": false, 00:27:27.358 "esnap_clone": false 00:27:27.358 } 00:27:27.358 } 00:27:27.358 } 00:27:27.358 ]' 00:27:27.358 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:27.358 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:27:27.358 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:27.358 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:27.358 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:27.358 23:08:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:27:27.358 23:08:54 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:27:27.358 23:08:54 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:27:27.358 23:08:54 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7bf3c798-9ba4-4c78-81ee-65437a88c338 -c nvc0n1p0 --l2p_dram_limit 60 00:27:27.617 [2024-12-09 23:08:54.842721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.617 [2024-12-09 23:08:54.842796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:27.617 [2024-12-09 23:08:54.842817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:27.617 
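Two details in the trace above deserve a note. First, the "[: -eq: unary operator expected" error from fio.sh line 52 is a shell quoting bug rather than a test failure: the variable on the left-hand side of -eq is empty, so the test collapses to '[' -eq 1 ']'; a guard like [ "${var:-0}" -eq 1 ] (variable name hypothetical, it is not visible in the trace) would keep the check well-formed. Second, the FTL device is assembled by the single bdev_ftl_create RPC above; restated as a standalone command with editorial annotations (the flags and values are exactly those from the trace):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
      -b ftl0 \
      -d 7bf3c798-9ba4-4c78-81ee-65437a88c338 \
      -c nvc0n1p0 \
      --l2p_dram_limit 60
  # -t 240            rpc.py client timeout in seconds (startup here takes ~4.8 s,
  #                   most of it NV cache scrubbing, but it can be much slower)
  # -b ftl0           name of the FTL bdev to create
  # -d <uuid>         base (data) bdev: the thin-provisioned lvol sized above
  # -c nvc0n1p0       NV cache: the 5171 MiB split of the controller at 0000:00:10.0
  # --l2p_dram_limit  cap the resident logical-to-physical table at 60 MiB

Everything from here down to "Management process finished, name 'FTL startup'" is that one call bringing the device up step by step.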
[2024-12-09 23:08:54.842830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.617 [2024-12-09 23:08:54.842918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.617 [2024-12-09 23:08:54.842935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:27.617 [2024-12-09 23:08:54.842951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:27.617 [2024-12-09 23:08:54.842962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.617 [2024-12-09 23:08:54.843012] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:27.617 [2024-12-09 23:08:54.844141] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:27.617 [2024-12-09 23:08:54.844178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.617 [2024-12-09 23:08:54.844190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:27.617 [2024-12-09 23:08:54.844204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:27:27.617 [2024-12-09 23:08:54.844214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.617 [2024-12-09 23:08:54.844376] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 79e84b50-2cb6-48b3-ac39-7bdeee73bfb4 00:27:27.617 [2024-12-09 23:08:54.846236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.617 [2024-12-09 23:08:54.846442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:27.617 [2024-12-09 23:08:54.846482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:27.617 [2024-12-09 23:08:54.846496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.617 [2024-12-09 23:08:54.859099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.617 [2024-12-09 23:08:54.859179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:27.617 [2024-12-09 23:08:54.859195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.501 ms 00:27:27.617 [2024-12-09 23:08:54.859209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.617 [2024-12-09 23:08:54.859369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.617 [2024-12-09 23:08:54.859389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:27.617 [2024-12-09 23:08:54.859401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:27:27.617 [2024-12-09 23:08:54.859419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.617 [2024-12-09 23:08:54.859558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.617 [2024-12-09 23:08:54.859576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:27.617 [2024-12-09 23:08:54.859588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:27.617 [2024-12-09 23:08:54.859601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.617 [2024-12-09 23:08:54.859636] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:27.617 [2024-12-09 23:08:54.865157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.617 [2024-12-09 
23:08:54.865381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:27.617 [2024-12-09 23:08:54.865414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.533 ms 00:27:27.617 [2024-12-09 23:08:54.865429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.617 [2024-12-09 23:08:54.865524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.617 [2024-12-09 23:08:54.865541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:27.617 [2024-12-09 23:08:54.865555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:27.617 [2024-12-09 23:08:54.865566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.617 [2024-12-09 23:08:54.865627] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:27.617 [2024-12-09 23:08:54.865795] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:27.617 [2024-12-09 23:08:54.865818] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:27.617 [2024-12-09 23:08:54.865834] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:27.617 [2024-12-09 23:08:54.865851] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:27.617 [2024-12-09 23:08:54.865864] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:27.617 [2024-12-09 23:08:54.865880] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:27.617 [2024-12-09 23:08:54.865891] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:27.617 [2024-12-09 23:08:54.865903] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:27.617 [2024-12-09 23:08:54.865913] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:27.617 [2024-12-09 23:08:54.865927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.617 [2024-12-09 23:08:54.865940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:27.617 [2024-12-09 23:08:54.865954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:27:27.617 [2024-12-09 23:08:54.865964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.617 [2024-12-09 23:08:54.866052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.617 [2024-12-09 23:08:54.866064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:27.617 [2024-12-09 23:08:54.866078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:27:27.617 [2024-12-09 23:08:54.866088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.617 [2024-12-09 23:08:54.866198] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:27.617 [2024-12-09 23:08:54.866211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:27.617 [2024-12-09 23:08:54.866228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:27.617 [2024-12-09 23:08:54.866239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866253] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:27:27.617 [2024-12-09 23:08:54.866263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:27.617 [2024-12-09 23:08:54.866284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:27.617 [2024-12-09 23:08:54.866298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:27.617 [2024-12-09 23:08:54.866324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:27.617 [2024-12-09 23:08:54.866334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:27.617 [2024-12-09 23:08:54.866347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:27.617 [2024-12-09 23:08:54.866357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:27.617 [2024-12-09 23:08:54.866370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:27.617 [2024-12-09 23:08:54.866379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:27.617 [2024-12-09 23:08:54.866403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:27.617 [2024-12-09 23:08:54.866415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:27.617 [2024-12-09 23:08:54.866437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.617 [2024-12-09 23:08:54.866478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:27.617 [2024-12-09 23:08:54.866488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.617 [2024-12-09 23:08:54.866510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:27.617 [2024-12-09 23:08:54.866522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.617 [2024-12-09 23:08:54.866544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:27.617 [2024-12-09 23:08:54.866553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:27.617 [2024-12-09 23:08:54.866574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:27.617 [2024-12-09 23:08:54.866589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:27.617 [2024-12-09 23:08:54.866631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:27.617 [2024-12-09 23:08:54.866641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:27.617 [2024-12-09 23:08:54.866654] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:27.617 [2024-12-09 23:08:54.866663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:27.617 [2024-12-09 23:08:54.866675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:27.617 [2024-12-09 23:08:54.866684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:27.617 [2024-12-09 23:08:54.866708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:27.617 [2024-12-09 23:08:54.866721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866730] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:27.617 [2024-12-09 23:08:54.866743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:27.617 [2024-12-09 23:08:54.866753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:27.617 [2024-12-09 23:08:54.866766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:27.617 [2024-12-09 23:08:54.866777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:27.617 [2024-12-09 23:08:54.866792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:27.617 [2024-12-09 23:08:54.866801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:27.617 [2024-12-09 23:08:54.866814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:27.617 [2024-12-09 23:08:54.866823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:27.617 [2024-12-09 23:08:54.866835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:27.617 [2024-12-09 23:08:54.866847] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:27.617 [2024-12-09 23:08:54.866862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:27.617 [2024-12-09 23:08:54.866874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:27.617 [2024-12-09 23:08:54.866887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:27.617 [2024-12-09 23:08:54.866897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:27.617 [2024-12-09 23:08:54.866910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:27.617 [2024-12-09 23:08:54.866922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:27.617 [2024-12-09 23:08:54.866937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:27.617 [2024-12-09 23:08:54.866947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:27.617 [2024-12-09 23:08:54.866960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:27:27.618 [2024-12-09 23:08:54.866971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:27.618 [2024-12-09 23:08:54.866986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:27.618 [2024-12-09 23:08:54.866997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:27.618 [2024-12-09 23:08:54.867011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:27.618 [2024-12-09 23:08:54.867022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:27.618 [2024-12-09 23:08:54.867034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:27.618 [2024-12-09 23:08:54.867045] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:27.618 [2024-12-09 23:08:54.867072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:27.618 [2024-12-09 23:08:54.867090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:27.618 [2024-12-09 23:08:54.867104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:27.618 [2024-12-09 23:08:54.867115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:27.618 [2024-12-09 23:08:54.867129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:27.618 [2024-12-09 23:08:54.867142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.618 [2024-12-09 23:08:54.867155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:27.618 [2024-12-09 23:08:54.867166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:27:27.618 [2024-12-09 23:08:54.867179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.618 [2024-12-09 23:08:54.867255] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
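Several numbers in this layout dump cross-check against each other and against the bdev dumps:

  20971520 L2P entries x 4 B/entry = 80.00 MiB  -> the "Region l2p" size in the NV cache layout
  20971520 blocks x 4096 B/block   = 80 GiB     -> the num_blocks ftl0 reports once created
  NV cache chunk count 5                        -> the "Scrubbing 5 chunks" message that follows

So the FTL exposes 80 GiB of user capacity out of the 103424 MiB (101 GiB) base device, with the remainder presumably held back for band metadata and overprovisioning, and keeps one 4-byte L2P entry per user block. The --l2p_dram_limit 60 from the create call is why only part of that 80 MiB table is kept resident, as the "l2p maximum resident size is: 59 (of 60) MiB" message further down confirms.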
00:27:27.618 [2024-12-09 23:08:54.867274] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:31.812 [2024-12-09 23:08:58.993630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.812 [2024-12-09 23:08:58.993965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:31.812 [2024-12-09 23:08:58.993996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4133.073 ms 00:27:31.812 [2024-12-09 23:08:58.994011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.812 [2024-12-09 23:08:59.034719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.812 [2024-12-09 23:08:59.035044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:31.812 [2024-12-09 23:08:59.035074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.432 ms 00:27:31.812 [2024-12-09 23:08:59.035089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.812 [2024-12-09 23:08:59.035269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.812 [2024-12-09 23:08:59.035285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:31.812 [2024-12-09 23:08:59.035298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:31.812 [2024-12-09 23:08:59.035315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.812 [2024-12-09 23:08:59.103727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.812 [2024-12-09 23:08:59.103807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:31.812 [2024-12-09 23:08:59.103829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.467 ms 00:27:31.812 [2024-12-09 23:08:59.103845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.812 [2024-12-09 23:08:59.103909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.812 [2024-12-09 23:08:59.103936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:31.812 [2024-12-09 23:08:59.103949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:31.812 [2024-12-09 23:08:59.103963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.812 [2024-12-09 23:08:59.104857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.812 [2024-12-09 23:08:59.104880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:31.812 [2024-12-09 23:08:59.104892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:27:31.812 [2024-12-09 23:08:59.104910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.812 [2024-12-09 23:08:59.105050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.812 [2024-12-09 23:08:59.105069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:31.812 [2024-12-09 23:08:59.105080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:27:31.812 [2024-12-09 23:08:59.105096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.812 [2024-12-09 23:08:59.132311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.812 [2024-12-09 23:08:59.132377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:31.812 [2024-12-09 
23:08:59.132395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.228 ms 00:27:31.812 [2024-12-09 23:08:59.132408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.071 [2024-12-09 23:08:59.151032] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:32.071 [2024-12-09 23:08:59.178063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.071 [2024-12-09 23:08:59.178136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:32.071 [2024-12-09 23:08:59.178160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.551 ms 00:27:32.071 [2024-12-09 23:08:59.178171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.071 [2024-12-09 23:08:59.268866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.071 [2024-12-09 23:08:59.268943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:32.071 [2024-12-09 23:08:59.268964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.761 ms 00:27:32.071 [2024-12-09 23:08:59.268976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.071 [2024-12-09 23:08:59.269231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.071 [2024-12-09 23:08:59.269247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:32.072 [2024-12-09 23:08:59.269267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:27:32.072 [2024-12-09 23:08:59.269278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.072 [2024-12-09 23:08:59.311991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.072 [2024-12-09 23:08:59.312066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:32.072 [2024-12-09 23:08:59.312089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.674 ms 00:27:32.072 [2024-12-09 23:08:59.312101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.072 [2024-12-09 23:08:59.354348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.072 [2024-12-09 23:08:59.354436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:32.072 [2024-12-09 23:08:59.354480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.215 ms 00:27:32.072 [2024-12-09 23:08:59.354492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.072 [2024-12-09 23:08:59.355254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.072 [2024-12-09 23:08:59.355292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:32.072 [2024-12-09 23:08:59.355307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:27:32.072 [2024-12-09 23:08:59.355318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.331 [2024-12-09 23:08:59.487357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.331 [2024-12-09 23:08:59.487491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:32.331 [2024-12-09 23:08:59.487525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 132.135 ms 00:27:32.331 [2024-12-09 23:08:59.487537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.331 [2024-12-09 
23:08:59.533599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.331 [2024-12-09 23:08:59.533934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:32.331 [2024-12-09 23:08:59.533970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.935 ms 00:27:32.331 [2024-12-09 23:08:59.533982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.331 [2024-12-09 23:08:59.576740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.331 [2024-12-09 23:08:59.577081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:32.331 [2024-12-09 23:08:59.577117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.731 ms 00:27:32.331 [2024-12-09 23:08:59.577128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.331 [2024-12-09 23:08:59.620071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.331 [2024-12-09 23:08:59.620406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:32.331 [2024-12-09 23:08:59.620438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.918 ms 00:27:32.331 [2024-12-09 23:08:59.620467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.331 [2024-12-09 23:08:59.620557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.331 [2024-12-09 23:08:59.620570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:32.331 [2024-12-09 23:08:59.620595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:32.331 [2024-12-09 23:08:59.620605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.331 [2024-12-09 23:08:59.620793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.331 [2024-12-09 23:08:59.620815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:32.331 [2024-12-09 23:08:59.620829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:32.331 [2024-12-09 23:08:59.620840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.331 [2024-12-09 23:08:59.622376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4786.899 ms, result 0 00:27:32.331 { 00:27:32.331 "name": "ftl0", 00:27:32.331 "uuid": "79e84b50-2cb6-48b3-ac39-7bdeee73bfb4" 00:27:32.331 } 00:27:32.331 23:08:59 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:27:32.331 23:08:59 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:27:32.331 23:08:59 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:32.331 23:08:59 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:27:32.331 23:08:59 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:32.331 23:08:59 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:32.331 23:08:59 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:32.592 23:08:59 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:27:32.851 [ 00:27:32.851 { 00:27:32.851 "name": "ftl0", 00:27:32.851 "aliases": [ 00:27:32.851 "79e84b50-2cb6-48b3-ac39-7bdeee73bfb4" 00:27:32.851 ], 00:27:32.851 "product_name": "FTL 
disk", 00:27:32.851 "block_size": 4096, 00:27:32.851 "num_blocks": 20971520, 00:27:32.851 "uuid": "79e84b50-2cb6-48b3-ac39-7bdeee73bfb4", 00:27:32.851 "assigned_rate_limits": { 00:27:32.851 "rw_ios_per_sec": 0, 00:27:32.851 "rw_mbytes_per_sec": 0, 00:27:32.851 "r_mbytes_per_sec": 0, 00:27:32.851 "w_mbytes_per_sec": 0 00:27:32.851 }, 00:27:32.851 "claimed": false, 00:27:32.851 "zoned": false, 00:27:32.851 "supported_io_types": { 00:27:32.851 "read": true, 00:27:32.851 "write": true, 00:27:32.851 "unmap": true, 00:27:32.851 "flush": true, 00:27:32.852 "reset": false, 00:27:32.852 "nvme_admin": false, 00:27:32.852 "nvme_io": false, 00:27:32.852 "nvme_io_md": false, 00:27:32.852 "write_zeroes": true, 00:27:32.852 "zcopy": false, 00:27:32.852 "get_zone_info": false, 00:27:32.852 "zone_management": false, 00:27:32.852 "zone_append": false, 00:27:32.852 "compare": false, 00:27:32.852 "compare_and_write": false, 00:27:32.852 "abort": false, 00:27:32.852 "seek_hole": false, 00:27:32.852 "seek_data": false, 00:27:32.852 "copy": false, 00:27:32.852 "nvme_iov_md": false 00:27:32.852 }, 00:27:32.852 "driver_specific": { 00:27:32.852 "ftl": { 00:27:32.852 "base_bdev": "7bf3c798-9ba4-4c78-81ee-65437a88c338", 00:27:32.852 "cache": "nvc0n1p0" 00:27:32.852 } 00:27:32.852 } 00:27:32.852 } 00:27:32.852 ] 00:27:32.852 23:09:00 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:27:32.852 23:09:00 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:27:32.852 23:09:00 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:33.110 23:09:00 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:27:33.110 23:09:00 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:33.368 [2024-12-09 23:09:00.529365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.368 [2024-12-09 23:09:00.529442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:33.368 [2024-12-09 23:09:00.529472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:33.368 [2024-12-09 23:09:00.529491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.368 [2024-12-09 23:09:00.529529] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:33.368 [2024-12-09 23:09:00.533733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.368 [2024-12-09 23:09:00.533782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:33.368 [2024-12-09 23:09:00.533800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.182 ms 00:27:33.368 [2024-12-09 23:09:00.533811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.368 [2024-12-09 23:09:00.534335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.368 [2024-12-09 23:09:00.534357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:33.368 [2024-12-09 23:09:00.534371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:27:33.368 [2024-12-09 23:09:00.534382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.368 [2024-12-09 23:09:00.536921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.368 [2024-12-09 23:09:00.537106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:33.368 
[2024-12-09 23:09:00.537139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms 00:27:33.368 [2024-12-09 23:09:00.537150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.368 [2024-12-09 23:09:00.542297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.368 [2024-12-09 23:09:00.542353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:33.368 [2024-12-09 23:09:00.542370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.098 ms 00:27:33.368 [2024-12-09 23:09:00.542381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.368 [2024-12-09 23:09:00.584597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.368 [2024-12-09 23:09:00.584683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:33.368 [2024-12-09 23:09:00.584729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.144 ms 00:27:33.368 [2024-12-09 23:09:00.584740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.368 [2024-12-09 23:09:00.609496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.368 [2024-12-09 23:09:00.609578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:33.368 [2024-12-09 23:09:00.609606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.679 ms 00:27:33.368 [2024-12-09 23:09:00.609618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.368 [2024-12-09 23:09:00.609930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.368 [2024-12-09 23:09:00.609951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:33.368 [2024-12-09 23:09:00.609966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:27:33.368 [2024-12-09 23:09:00.609977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.368 [2024-12-09 23:09:00.651540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.368 [2024-12-09 23:09:00.651889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:33.368 [2024-12-09 23:09:00.651922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.583 ms 00:27:33.368 [2024-12-09 23:09:00.651934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.368 [2024-12-09 23:09:00.693634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.368 [2024-12-09 23:09:00.693963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:33.368 [2024-12-09 23:09:00.693999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.672 ms 00:27:33.368 [2024-12-09 23:09:00.694011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.628 [2024-12-09 23:09:00.734851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.628 [2024-12-09 23:09:00.734955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:33.628 [2024-12-09 23:09:00.734978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.790 ms 00:27:33.628 [2024-12-09 23:09:00.734989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.628 [2024-12-09 23:09:00.778931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.628 [2024-12-09 23:09:00.779017] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:33.628 [2024-12-09 23:09:00.779039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.799 ms 00:27:33.628 [2024-12-09 23:09:00.779050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.628 [2024-12-09 23:09:00.779148] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:33.628 [2024-12-09 23:09:00.779170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:33.628 [2024-12-09 23:09:00.779397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 
[2024-12-09 23:09:00.779485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:27:33.629 [2024-12-09 23:09:00.779821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.779983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:33.629 [2024-12-09 23:09:00.780538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:33.629 [2024-12-09 23:09:00.780552] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 79e84b50-2cb6-48b3-ac39-7bdeee73bfb4 00:27:33.629 [2024-12-09 23:09:00.780563] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:33.629 [2024-12-09 23:09:00.780579] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:33.629 [2024-12-09 23:09:00.780594] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:33.629 [2024-12-09 23:09:00.780607] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:33.629 [2024-12-09 23:09:00.780617] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:33.629 [2024-12-09 23:09:00.780630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:33.629 [2024-12-09 23:09:00.780640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:33.629 [2024-12-09 23:09:00.780652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:33.629 [2024-12-09 23:09:00.780660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:33.629 [2024-12-09 23:09:00.780674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.630 [2024-12-09 23:09:00.780685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:33.630 [2024-12-09 23:09:00.780699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.532 ms 00:27:33.630 [2024-12-09 23:09:00.780710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.630 [2024-12-09 23:09:00.802689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.630 [2024-12-09 23:09:00.802768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:33.630 [2024-12-09 23:09:00.802787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.911 ms 00:27:33.630 [2024-12-09 23:09:00.802798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.630 [2024-12-09 23:09:00.803479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.630 [2024-12-09 23:09:00.803494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:33.630 [2024-12-09 23:09:00.803509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:27:33.630 [2024-12-09 23:09:00.803519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.630 [2024-12-09 23:09:00.876521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.630 [2024-12-09 23:09:00.876600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:33.630 [2024-12-09 23:09:00.876620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.630 [2024-12-09 23:09:00.876631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
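The Rollback entries that begin here and continue below are the shutdown-time mirror of the startup sequence: the same step names (Initialize reloc, bands metadata, trim map, valid map, NV cache, metadata, core IO channel, bands, memory pools, superblock, open cache/base bdev) replayed in reverse registration order to release state, each reporting duration: 0.000 ms because the actual persistence work was already done by the Persist and "Set FTL clean state" actions above.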
00:27:33.630 [2024-12-09 23:09:00.876733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.630 [2024-12-09 23:09:00.876754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:33.630 [2024-12-09 23:09:00.876768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.630 [2024-12-09 23:09:00.876779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.630 [2024-12-09 23:09:00.876950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.630 [2024-12-09 23:09:00.876965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:33.630 [2024-12-09 23:09:00.876980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.630 [2024-12-09 23:09:00.876990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.630 [2024-12-09 23:09:00.877028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.630 [2024-12-09 23:09:00.877039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:33.630 [2024-12-09 23:09:00.877051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.630 [2024-12-09 23:09:00.877062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.888 [2024-12-09 23:09:01.016062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.888 [2024-12-09 23:09:01.016139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:33.888 [2024-12-09 23:09:01.016158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.888 [2024-12-09 23:09:01.016170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.888 [2024-12-09 23:09:01.121369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.888 [2024-12-09 23:09:01.121447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:33.888 [2024-12-09 23:09:01.121494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.888 [2024-12-09 23:09:01.121506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.888 [2024-12-09 23:09:01.121668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.888 [2024-12-09 23:09:01.121686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:33.888 [2024-12-09 23:09:01.121700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.888 [2024-12-09 23:09:01.121711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.888 [2024-12-09 23:09:01.121797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.888 [2024-12-09 23:09:01.121810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:33.888 [2024-12-09 23:09:01.121823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.888 [2024-12-09 23:09:01.121834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.888 [2024-12-09 23:09:01.122000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.888 [2024-12-09 23:09:01.122015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:33.888 [2024-12-09 23:09:01.122032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.888 [2024-12-09 
23:09:01.122041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.888 [2024-12-09 23:09:01.122100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.888 [2024-12-09 23:09:01.122113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:33.888 [2024-12-09 23:09:01.122127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.888 [2024-12-09 23:09:01.122137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.888 [2024-12-09 23:09:01.122188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.888 [2024-12-09 23:09:01.122199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:33.888 [2024-12-09 23:09:01.122212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.889 [2024-12-09 23:09:01.122226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.889 [2024-12-09 23:09:01.122289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.889 [2024-12-09 23:09:01.122301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:33.889 [2024-12-09 23:09:01.122314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.889 [2024-12-09 23:09:01.122325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.889 [2024-12-09 23:09:01.122528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 594.064 ms, result 0 00:27:33.889 true 00:27:33.889 23:09:01 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77328 00:27:33.889 23:09:01 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77328 ']' 00:27:33.889 23:09:01 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77328 00:27:33.889 23:09:01 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:27:33.889 23:09:01 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.889 23:09:01 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77328 00:27:33.889 23:09:01 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:33.889 23:09:01 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:33.889 killing process with pid 77328 00:27:33.889 23:09:01 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77328' 00:27:33.889 23:09:01 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77328 00:27:33.889 23:09:01 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77328 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:27:39.171 23:09:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:39.171 23:09:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:39.171 23:09:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:39.171 23:09:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:27:39.171 23:09:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:39.171 23:09:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:27:39.171 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:27:39.171 fio-3.35 00:27:39.171 Starting 1 thread 00:27:44.525 00:27:44.525 test: (groupid=0, jobs=1): err= 0: pid=77552: Mon Dec 9 23:09:11 2024 00:27:44.525 read: IOPS=957, BW=63.6MiB/s (66.7MB/s)(255MiB/4004msec) 00:27:44.525 slat (usec): min=4, max=123, avg= 7.59, stdev= 4.29 00:27:44.525 clat (usec): min=295, max=1059, avg=471.15, stdev=61.46 00:27:44.525 lat (usec): min=302, max=1065, avg=478.74, stdev=61.84 00:27:44.525 clat percentiles (usec): 00:27:44.525 | 1.00th=[ 338], 5.00th=[ 392], 10.00th=[ 404], 20.00th=[ 412], 00:27:44.525 | 30.00th=[ 429], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 486], 00:27:44.525 | 70.00th=[ 494], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 570], 00:27:44.525 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 758], 99.95th=[ 840], 00:27:44.525 | 99.99th=[ 1057] 00:27:44.525 write: IOPS=964, BW=64.0MiB/s (67.1MB/s)(256MiB/4000msec); 0 zone resets 00:27:44.525 slat (usec): min=16, max=150, avg=21.32, stdev= 7.33 00:27:44.525 clat (usec): min=351, max=926, avg=529.82, stdev=77.34 00:27:44.525 lat (usec): min=374, max=960, avg=551.14, stdev=77.98 00:27:44.525 clat percentiles (usec): 00:27:44.525 | 1.00th=[ 383], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 461], 00:27:44.525 | 30.00th=[ 494], 40.00th=[ 502], 50.00th=[ 519], 60.00th=[ 545], 00:27:44.525 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 627], 95.00th=[ 652], 00:27:44.525 | 99.00th=[ 799], 99.50th=[ 848], 99.90th=[ 906], 99.95th=[ 922], 00:27:44.525 | 99.99th=[ 930] 00:27:44.525 bw ( KiB/s): min=64056, max=68136, per=100.00%, avg=65921.71, stdev=1468.20, samples=7 00:27:44.525 iops : min= 942, max= 1002, avg=969.43, stdev=21.59, samples=7 00:27:44.525 lat (usec) : 500=54.52%, 750=44.71%, 1000=0.75% 00:27:44.525 lat (msec) : 2=0.01% 
00:27:44.525 cpu : usr=98.45%, sys=0.45%, ctx=6, majf=0, minf=1169 00:27:44.525 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:44.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.525 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.525 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:44.525 00:27:44.525 Run status group 0 (all jobs): 00:27:44.525 READ: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=255MiB (267MB), run=4004-4004msec 00:27:44.526 WRITE: bw=64.0MiB/s (67.1MB/s), 64.0MiB/s-64.0MiB/s (67.1MB/s-67.1MB/s), io=256MiB (269MB), run=4000-4000msec 00:27:46.437 ----------------------------------------------------- 00:27:46.437 Suppressions used: 00:27:46.437 count bytes template 00:27:46.437 1 5 /usr/src/fio/parse.c 00:27:46.437 1 8 libtcmalloc_minimal.so 00:27:46.437 1 904 libcrypto.so 00:27:46.437 ----------------------------------------------------- 00:27:46.437 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:46.695 23:09:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:27:46.953 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:27:46.953 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:27:46.953 fio-3.35 00:27:46.953 Starting 2 threads 00:28:19.108 00:28:19.108 first_half: (groupid=0, jobs=1): err= 0: pid=77662: Mon Dec 9 23:09:41 2024 00:28:19.108 read: IOPS=2528, BW=9.88MiB/s (10.4MB/s)(255MiB/25804msec) 00:28:19.108 slat (usec): min=3, max=123, avg= 6.39, stdev= 2.16 00:28:19.108 clat (usec): min=1004, max=297130, avg=36908.09, stdev=20135.62 00:28:19.108 lat (usec): min=1011, max=297136, avg=36914.48, stdev=20135.81 00:28:19.108 clat percentiles (msec): 00:28:19.108 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:28:19.108 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:28:19.108 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 48], 00:28:19.108 | 99.00th=[ 153], 99.50th=[ 176], 99.90th=[ 222], 99.95th=[ 255], 00:28:19.108 | 99.99th=[ 288] 00:28:19.108 write: IOPS=2965, BW=11.6MiB/s (12.1MB/s)(256MiB/22103msec); 0 zone resets 00:28:19.108 slat (usec): min=4, max=536, avg= 8.95, stdev= 8.14 00:28:19.108 clat (usec): min=478, max=138921, avg=13530.09, stdev=22830.29 00:28:19.108 lat (usec): min=484, max=138940, avg=13539.04, stdev=22830.59 00:28:19.108 clat percentiles (usec): 00:28:19.108 | 1.00th=[ 1020], 5.00th=[ 1336], 10.00th=[ 1647], 20.00th=[ 2147], 00:28:19.108 | 30.00th=[ 3851], 40.00th=[ 5800], 50.00th=[ 6849], 60.00th=[ 7439], 00:28:19.108 | 70.00th=[ 8717], 80.00th=[ 11994], 90.00th=[ 33424], 95.00th=[ 81265], 00:28:19.108 | 99.00th=[ 99091], 99.50th=[119014], 99.90th=[129500], 99.95th=[130548], 00:28:19.108 | 99.99th=[135267] 00:28:19.108 bw ( KiB/s): min= 872, max=40400, per=81.85%, avg=19415.41, stdev=9909.49, samples=27 00:28:19.108 iops : min= 218, max=10100, avg=4853.85, stdev=2477.37, samples=27 00:28:19.108 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.38% 00:28:19.108 lat (msec) : 2=8.58%, 4=6.62%, 10=23.46%, 20=7.13%, 50=47.48% 00:28:19.108 lat (msec) : 100=4.67%, 250=1.58%, 500=0.03% 00:28:19.108 cpu : usr=99.21%, sys=0.20%, ctx=39, majf=0, minf=5618 00:28:19.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.8% 00:28:19.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.108 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:19.108 issued rwts: total=65246,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:19.108 second_half: (groupid=0, jobs=1): err= 0: pid=77663: Mon Dec 9 23:09:41 2024 00:28:19.108 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(255MiB/25671msec) 00:28:19.108 slat (nsec): min=3716, max=56051, avg=6468.16, stdev=2257.98 00:28:19.108 clat (usec): min=733, max=301852, avg=37606.08, stdev=18708.37 00:28:19.108 lat (usec): min=743, max=301860, avg=37612.55, stdev=18708.55 00:28:19.108 clat percentiles (msec): 00:28:19.108 | 1.00th=[ 7], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:28:19.108 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:28:19.108 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 55], 00:28:19.108 | 
99.00th=[ 136], 99.50th=[ 165], 99.90th=[ 199], 99.95th=[ 207], 00:28:19.108 | 99.99th=[ 296] 00:28:19.108 write: IOPS=3262, BW=12.7MiB/s (13.4MB/s)(256MiB/20085msec); 0 zone resets 00:28:19.108 slat (usec): min=4, max=451, avg= 8.94, stdev= 5.30 00:28:19.108 clat (usec): min=391, max=139821, avg=12708.68, stdev=22836.58 00:28:19.108 lat (usec): min=400, max=139829, avg=12717.62, stdev=22836.82 00:28:19.108 clat percentiles (usec): 00:28:19.108 | 1.00th=[ 1090], 5.00th=[ 1434], 10.00th=[ 1647], 20.00th=[ 1909], 00:28:19.108 | 30.00th=[ 2245], 40.00th=[ 3949], 50.00th=[ 5473], 60.00th=[ 6652], 00:28:19.108 | 70.00th=[ 8225], 80.00th=[ 12256], 90.00th=[ 23200], 95.00th=[ 81265], 00:28:19.108 | 99.00th=[ 98042], 99.50th=[114820], 99.90th=[131597], 99.95th=[135267], 00:28:19.108 | 99.99th=[137364] 00:28:19.108 bw ( KiB/s): min= 1024, max=48048, per=92.09%, avg=21845.33, stdev=11066.91, samples=24 00:28:19.108 iops : min= 256, max=12012, avg=5461.33, stdev=2766.73, samples=24 00:28:19.108 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.23% 00:28:19.108 lat (msec) : 2=11.69%, 4=8.53%, 10=17.01%, 20=8.44%, 50=47.12% 00:28:19.108 lat (msec) : 100=5.20%, 250=1.71%, 500=0.01% 00:28:19.108 cpu : usr=99.18%, sys=0.20%, ctx=51, majf=0, minf=5520 00:28:19.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:19.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.108 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:19.108 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:19.108 00:28:19.108 Run status group 0 (all jobs): 00:28:19.108 READ: bw=19.7MiB/s (20.7MB/s), 9.88MiB/s-9.92MiB/s (10.4MB/s-10.4MB/s), io=509MiB (534MB), run=25671-25804msec 00:28:19.108 WRITE: bw=23.2MiB/s (24.3MB/s), 11.6MiB/s-12.7MiB/s (12.1MB/s-13.4MB/s), io=512MiB (537MB), run=20085-22103msec 00:28:19.108 ----------------------------------------------------- 00:28:19.108 Suppressions used: 00:28:19.108 count bytes template 00:28:19.108 2 10 /usr/src/fio/parse.c 00:28:19.108 3 288 /usr/src/fio/iolog.c 00:28:19.108 1 8 libtcmalloc_minimal.so 00:28:19.108 1 904 libcrypto.so 00:28:19.108 ----------------------------------------------------- 00:28:19.108 00:28:19.108 23:09:43 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:28:19.108 23:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.108 23:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:19.108 
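The xtrace around this point is the fio_bdev wrapper's sanitizer handling, repeated before every fio invocation in this test: since the spdk_bdev ioengine plugin is built with ASan, the ASan runtime generally has to be loaded ahead of the instrumented plugin, so the wrapper resolves the libasan the plugin links against and prepends it to LD_PRELOAD. A condensed sketch of the same idiom, using the paths from this machine:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
  # Resolve the libasan the plugin was linked against; empty when not sanitized.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # Preload ASan first (when present), then the fio ioengine plugin itself.
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$job"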
23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:19.108 23:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:28:19.108 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:28:19.108 fio-3.35 00:28:19.108 Starting 1 thread 00:28:33.988 00:28:33.988 test: (groupid=0, jobs=1): err= 0: pid=78000: Mon Dec 9 23:09:59 2024 00:28:33.988 read: IOPS=7450, BW=29.1MiB/s (30.5MB/s)(255MiB/8751msec) 00:28:33.988 slat (nsec): min=3551, max=57641, avg=5655.03, stdev=2130.48 00:28:33.988 clat (usec): min=765, max=33746, avg=17169.21, stdev=1520.51 00:28:33.988 lat (usec): min=769, max=33751, avg=17174.86, stdev=1520.56 00:28:33.988 clat percentiles (usec): 00:28:33.988 | 1.00th=[15795], 5.00th=[16188], 10.00th=[16319], 20.00th=[16450], 00:28:33.988 | 30.00th=[16581], 40.00th=[16909], 50.00th=[16909], 60.00th=[17171], 00:28:33.988 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[18482], 00:28:33.988 | 99.00th=[25560], 99.50th=[27919], 99.90th=[31065], 99.95th=[32375], 00:28:33.988 | 99.99th=[33162] 00:28:33.988 write: IOPS=12.3k, BW=47.9MiB/s (50.2MB/s)(256MiB/5343msec); 0 zone resets 00:28:33.988 slat (usec): min=4, max=1006, avg= 8.59, stdev= 9.75 00:28:33.988 clat (usec): min=657, max=63817, avg=10384.71, stdev=12806.46 00:28:33.988 lat (usec): min=665, max=63824, avg=10393.30, stdev=12806.46 00:28:33.988 clat percentiles (usec): 00:28:33.988 | 1.00th=[ 1029], 5.00th=[ 1254], 10.00th=[ 1401], 20.00th=[ 1631], 00:28:33.988 | 30.00th=[ 1827], 40.00th=[ 2212], 50.00th=[ 6718], 60.00th=[ 7963], 00:28:33.988 | 70.00th=[ 8979], 80.00th=[10945], 90.00th=[37487], 95.00th=[39584], 00:28:33.988 | 99.00th=[43779], 99.50th=[46924], 99.90th=[54264], 99.95th=[56361], 00:28:33.988 | 99.99th=[58459] 00:28:33.988 bw ( KiB/s): min=26824, max=64632, per=97.14%, avg=47662.55, stdev=10865.73, samples=11 00:28:33.988 iops : min= 6706, max=16158, avg=11915.64, stdev=2716.43, samples=11 00:28:33.988 lat (usec) : 750=0.02%, 1000=0.37% 00:28:33.988 lat (msec) : 2=17.90%, 4=2.76%, 10=16.96%, 20=52.15%, 50=9.73% 00:28:33.988 lat (msec) : 100=0.11% 00:28:33.988 cpu : usr=98.77%, sys=0.45%, ctx=23, majf=0, 
minf=5565 00:28:33.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:33.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.988 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:33.988 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:33.988 00:28:33.988 Run status group 0 (all jobs): 00:28:33.988 READ: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=255MiB (267MB), run=8751-8751msec 00:28:33.988 WRITE: bw=47.9MiB/s (50.2MB/s), 47.9MiB/s-47.9MiB/s (50.2MB/s-50.2MB/s), io=256MiB (268MB), run=5343-5343msec 00:28:34.921 ----------------------------------------------------- 00:28:34.921 Suppressions used: 00:28:34.921 count bytes template 00:28:34.921 1 5 /usr/src/fio/parse.c 00:28:34.921 2 192 /usr/src/fio/iolog.c 00:28:34.921 1 8 libtcmalloc_minimal.so 00:28:34.921 1 904 libcrypto.so 00:28:34.921 ----------------------------------------------------- 00:28:34.921 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:28:34.921 Remove shared memory files 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57998 /dev/shm/spdk_tgt_trace.pid76220 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:28:34.921 ************************************ 00:28:34.921 END TEST ftl_fio_basic 00:28:34.921 ************************************ 00:28:34.921 00:28:34.921 real 1m11.980s 00:28:34.921 user 2m35.604s 00:28:34.921 sys 0m4.538s 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:34.921 23:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:28:34.921 23:10:02 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:28:34.921 23:10:02 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:34.921 23:10:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:34.921 23:10:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:34.921 ************************************ 00:28:34.921 START TEST ftl_bdevperf 00:28:34.921 ************************************ 00:28:34.921 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:28:35.181 * Looking for test storage... 
00:28:35.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:35.181 23:10:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:35.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.182 --rc genhtml_branch_coverage=1 00:28:35.182 --rc genhtml_function_coverage=1 00:28:35.182 --rc genhtml_legend=1 00:28:35.182 --rc geninfo_all_blocks=1 00:28:35.182 --rc geninfo_unexecuted_blocks=1 00:28:35.182 00:28:35.182 ' 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:35.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.182 --rc genhtml_branch_coverage=1 00:28:35.182 
--rc genhtml_function_coverage=1 00:28:35.182 --rc genhtml_legend=1 00:28:35.182 --rc geninfo_all_blocks=1 00:28:35.182 --rc geninfo_unexecuted_blocks=1 00:28:35.182 00:28:35.182 ' 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:35.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.182 --rc genhtml_branch_coverage=1 00:28:35.182 --rc genhtml_function_coverage=1 00:28:35.182 --rc genhtml_legend=1 00:28:35.182 --rc geninfo_all_blocks=1 00:28:35.182 --rc geninfo_unexecuted_blocks=1 00:28:35.182 00:28:35.182 ' 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:35.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.182 --rc genhtml_branch_coverage=1 00:28:35.182 --rc genhtml_function_coverage=1 00:28:35.182 --rc genhtml_legend=1 00:28:35.182 --rc geninfo_all_blocks=1 00:28:35.182 --rc geninfo_unexecuted_blocks=1 00:28:35.182 00:28:35.182 ' 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:35.182 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:28:35.439 23:10:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78244 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78244 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78244 ']' 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:35.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:35.440 23:10:02 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:35.440 [2024-12-09 23:10:02.622186] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:28:35.440 [2024-12-09 23:10:02.622330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78244 ] 00:28:35.698 [2024-12-09 23:10:02.809216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.698 [2024-12-09 23:10:02.949411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.266 23:10:03 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.266 23:10:03 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:36.266 23:10:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:36.266 23:10:03 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:28:36.266 23:10:03 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:36.266 23:10:03 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:28:36.266 23:10:03 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:28:36.266 23:10:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:36.525 23:10:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:36.525 23:10:03 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:28:36.525 23:10:03 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:36.525 23:10:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:36.525 23:10:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:36.525 23:10:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:28:36.525 23:10:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:28:36.525 23:10:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:36.784 { 00:28:36.784 "name": "nvme0n1", 00:28:36.784 "aliases": [ 00:28:36.784 "40637094-a869-4935-82ce-4b6c4cbbc864" 00:28:36.784 ], 00:28:36.784 "product_name": "NVMe disk", 00:28:36.784 "block_size": 4096, 00:28:36.784 "num_blocks": 1310720, 00:28:36.784 "uuid": "40637094-a869-4935-82ce-4b6c4cbbc864", 00:28:36.784 "numa_id": -1, 00:28:36.784 "assigned_rate_limits": { 00:28:36.784 "rw_ios_per_sec": 0, 00:28:36.784 "rw_mbytes_per_sec": 0, 00:28:36.784 "r_mbytes_per_sec": 0, 00:28:36.784 "w_mbytes_per_sec": 0 00:28:36.784 }, 00:28:36.784 "claimed": true, 00:28:36.784 "claim_type": "read_many_write_one", 00:28:36.784 "zoned": false, 00:28:36.784 "supported_io_types": { 00:28:36.784 "read": true, 00:28:36.784 "write": true, 00:28:36.784 "unmap": true, 00:28:36.784 "flush": true, 00:28:36.784 "reset": true, 00:28:36.784 "nvme_admin": true, 00:28:36.784 "nvme_io": true, 00:28:36.784 "nvme_io_md": false, 00:28:36.784 "write_zeroes": true, 00:28:36.784 "zcopy": false, 00:28:36.784 "get_zone_info": false, 00:28:36.784 "zone_management": false, 00:28:36.784 "zone_append": false, 00:28:36.784 "compare": true, 00:28:36.784 "compare_and_write": false, 00:28:36.784 "abort": true, 00:28:36.784 "seek_hole": false, 00:28:36.784 "seek_data": false, 00:28:36.784 "copy": true, 00:28:36.784 "nvme_iov_md": false 00:28:36.784 }, 00:28:36.784 "driver_specific": { 00:28:36.784 
"nvme": [ 00:28:36.784 { 00:28:36.784 "pci_address": "0000:00:11.0", 00:28:36.784 "trid": { 00:28:36.784 "trtype": "PCIe", 00:28:36.784 "traddr": "0000:00:11.0" 00:28:36.784 }, 00:28:36.784 "ctrlr_data": { 00:28:36.784 "cntlid": 0, 00:28:36.784 "vendor_id": "0x1b36", 00:28:36.784 "model_number": "QEMU NVMe Ctrl", 00:28:36.784 "serial_number": "12341", 00:28:36.784 "firmware_revision": "8.0.0", 00:28:36.784 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:36.784 "oacs": { 00:28:36.784 "security": 0, 00:28:36.784 "format": 1, 00:28:36.784 "firmware": 0, 00:28:36.784 "ns_manage": 1 00:28:36.784 }, 00:28:36.784 "multi_ctrlr": false, 00:28:36.784 "ana_reporting": false 00:28:36.784 }, 00:28:36.784 "vs": { 00:28:36.784 "nvme_version": "1.4" 00:28:36.784 }, 00:28:36.784 "ns_data": { 00:28:36.784 "id": 1, 00:28:36.784 "can_share": false 00:28:36.784 } 00:28:36.784 } 00:28:36.784 ], 00:28:36.784 "mp_policy": "active_passive" 00:28:36.784 } 00:28:36.784 } 00:28:36.784 ]' 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:36.784 23:10:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:37.042 23:10:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=26247e5b-f9a4-4077-b604-73b5a0ccf10d 00:28:37.042 23:10:04 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:28:37.042 23:10:04 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 26247e5b-f9a4-4077-b604-73b5a0ccf10d 00:28:37.302 23:10:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:37.563 23:10:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=ae0437df-97c0-43bf-90c9-b96d68506dbe 00:28:37.563 23:10:04 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ae0437df-97c0-43bf-90c9-b96d68506dbe 00:28:37.832 23:10:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:37.832 23:10:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:37.832 23:10:05 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:28:37.832 23:10:05 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:37.832 23:10:05 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:37.832 23:10:05 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:28:37.832 23:10:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:37.832 23:10:05 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:37.832 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:37.832 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:28:37.832 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:28:37.832 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:38.091 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:38.091 { 00:28:38.091 "name": "de77ea4f-0aaf-42a8-b3f6-0aa84059ce13", 00:28:38.091 "aliases": [ 00:28:38.091 "lvs/nvme0n1p0" 00:28:38.091 ], 00:28:38.091 "product_name": "Logical Volume", 00:28:38.091 "block_size": 4096, 00:28:38.091 "num_blocks": 26476544, 00:28:38.091 "uuid": "de77ea4f-0aaf-42a8-b3f6-0aa84059ce13", 00:28:38.091 "assigned_rate_limits": { 00:28:38.091 "rw_ios_per_sec": 0, 00:28:38.091 "rw_mbytes_per_sec": 0, 00:28:38.091 "r_mbytes_per_sec": 0, 00:28:38.091 "w_mbytes_per_sec": 0 00:28:38.091 }, 00:28:38.091 "claimed": false, 00:28:38.091 "zoned": false, 00:28:38.091 "supported_io_types": { 00:28:38.091 "read": true, 00:28:38.091 "write": true, 00:28:38.091 "unmap": true, 00:28:38.091 "flush": false, 00:28:38.091 "reset": true, 00:28:38.091 "nvme_admin": false, 00:28:38.091 "nvme_io": false, 00:28:38.091 "nvme_io_md": false, 00:28:38.091 "write_zeroes": true, 00:28:38.091 "zcopy": false, 00:28:38.091 "get_zone_info": false, 00:28:38.091 "zone_management": false, 00:28:38.091 "zone_append": false, 00:28:38.091 "compare": false, 00:28:38.091 "compare_and_write": false, 00:28:38.091 "abort": false, 00:28:38.091 "seek_hole": true, 00:28:38.091 "seek_data": true, 00:28:38.091 "copy": false, 00:28:38.091 "nvme_iov_md": false 00:28:38.091 }, 00:28:38.091 "driver_specific": { 00:28:38.091 "lvol": { 00:28:38.091 "lvol_store_uuid": "ae0437df-97c0-43bf-90c9-b96d68506dbe", 00:28:38.091 "base_bdev": "nvme0n1", 00:28:38.091 "thin_provision": true, 00:28:38.091 "num_allocated_clusters": 0, 00:28:38.091 "snapshot": false, 00:28:38.091 "clone": false, 00:28:38.091 "esnap_clone": false 00:28:38.091 } 00:28:38.091 } 00:28:38.091 } 00:28:38.091 ]' 00:28:38.091 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:38.091 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:28:38.091 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:38.091 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:38.091 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:38.091 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:28:38.091 23:10:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:28:38.091 23:10:05 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:28:38.091 23:10:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:38.658 23:10:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:38.659 23:10:05 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:38.659 23:10:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:38.659 23:10:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:38.659 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:38.659 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:28:38.659 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:28:38.659 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:38.659 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:38.659 { 00:28:38.659 "name": "de77ea4f-0aaf-42a8-b3f6-0aa84059ce13", 00:28:38.659 "aliases": [ 00:28:38.659 "lvs/nvme0n1p0" 00:28:38.659 ], 00:28:38.659 "product_name": "Logical Volume", 00:28:38.659 "block_size": 4096, 00:28:38.659 "num_blocks": 26476544, 00:28:38.659 "uuid": "de77ea4f-0aaf-42a8-b3f6-0aa84059ce13", 00:28:38.659 "assigned_rate_limits": { 00:28:38.659 "rw_ios_per_sec": 0, 00:28:38.659 "rw_mbytes_per_sec": 0, 00:28:38.659 "r_mbytes_per_sec": 0, 00:28:38.659 "w_mbytes_per_sec": 0 00:28:38.659 }, 00:28:38.659 "claimed": false, 00:28:38.659 "zoned": false, 00:28:38.659 "supported_io_types": { 00:28:38.659 "read": true, 00:28:38.659 "write": true, 00:28:38.659 "unmap": true, 00:28:38.659 "flush": false, 00:28:38.659 "reset": true, 00:28:38.659 "nvme_admin": false, 00:28:38.659 "nvme_io": false, 00:28:38.659 "nvme_io_md": false, 00:28:38.659 "write_zeroes": true, 00:28:38.659 "zcopy": false, 00:28:38.659 "get_zone_info": false, 00:28:38.659 "zone_management": false, 00:28:38.659 "zone_append": false, 00:28:38.659 "compare": false, 00:28:38.659 "compare_and_write": false, 00:28:38.659 "abort": false, 00:28:38.659 "seek_hole": true, 00:28:38.659 "seek_data": true, 00:28:38.659 "copy": false, 00:28:38.659 "nvme_iov_md": false 00:28:38.659 }, 00:28:38.659 "driver_specific": { 00:28:38.659 "lvol": { 00:28:38.659 "lvol_store_uuid": "ae0437df-97c0-43bf-90c9-b96d68506dbe", 00:28:38.659 "base_bdev": "nvme0n1", 00:28:38.659 "thin_provision": true, 00:28:38.659 "num_allocated_clusters": 0, 00:28:38.659 "snapshot": false, 00:28:38.659 "clone": false, 00:28:38.659 "esnap_clone": false 00:28:38.659 } 00:28:38.659 } 00:28:38.659 } 00:28:38.659 ]' 00:28:38.659 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:38.659 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:28:38.659 23:10:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:28:38.918 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 00:28:39.182 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:39.182 { 00:28:39.182 "name": "de77ea4f-0aaf-42a8-b3f6-0aa84059ce13", 00:28:39.182 "aliases": [ 00:28:39.182 "lvs/nvme0n1p0" 00:28:39.182 ], 00:28:39.182 "product_name": "Logical Volume", 00:28:39.182 "block_size": 4096, 00:28:39.182 "num_blocks": 26476544, 00:28:39.182 "uuid": "de77ea4f-0aaf-42a8-b3f6-0aa84059ce13", 00:28:39.182 "assigned_rate_limits": { 00:28:39.182 "rw_ios_per_sec": 0, 00:28:39.182 "rw_mbytes_per_sec": 0, 00:28:39.182 "r_mbytes_per_sec": 0, 00:28:39.182 "w_mbytes_per_sec": 0 00:28:39.182 }, 00:28:39.182 "claimed": false, 00:28:39.182 "zoned": false, 00:28:39.182 "supported_io_types": { 00:28:39.182 "read": true, 00:28:39.182 "write": true, 00:28:39.182 "unmap": true, 00:28:39.182 "flush": false, 00:28:39.182 "reset": true, 00:28:39.182 "nvme_admin": false, 00:28:39.182 "nvme_io": false, 00:28:39.182 "nvme_io_md": false, 00:28:39.182 "write_zeroes": true, 00:28:39.182 "zcopy": false, 00:28:39.182 "get_zone_info": false, 00:28:39.182 "zone_management": false, 00:28:39.182 "zone_append": false, 00:28:39.182 "compare": false, 00:28:39.182 "compare_and_write": false, 00:28:39.182 "abort": false, 00:28:39.182 "seek_hole": true, 00:28:39.182 "seek_data": true, 00:28:39.182 "copy": false, 00:28:39.182 "nvme_iov_md": false 00:28:39.182 }, 00:28:39.182 "driver_specific": { 00:28:39.182 "lvol": { 00:28:39.182 "lvol_store_uuid": "ae0437df-97c0-43bf-90c9-b96d68506dbe", 00:28:39.182 "base_bdev": "nvme0n1", 00:28:39.182 "thin_provision": true, 00:28:39.182 "num_allocated_clusters": 0, 00:28:39.182 "snapshot": false, 00:28:39.182 "clone": false, 00:28:39.182 "esnap_clone": false 00:28:39.182 } 00:28:39.182 } 00:28:39.182 } 00:28:39.182 ]' 00:28:39.182 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:39.182 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:28:39.182 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:39.443 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:39.443 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:39.443 23:10:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:28:39.443 23:10:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:28:39.444 23:10:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d de77ea4f-0aaf-42a8-b3f6-0aa84059ce13 -c nvc0n1p0 --l2p_dram_limit 20 00:28:39.444 [2024-12-09 23:10:06.755126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.444 [2024-12-09 23:10:06.755213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:39.444 [2024-12-09 23:10:06.755231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:39.444 [2024-12-09 23:10:06.755245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.444 [2024-12-09 23:10:06.755320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.444 [2024-12-09 23:10:06.755336] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:39.444 [2024-12-09 23:10:06.755347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:39.444 [2024-12-09 23:10:06.755360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.444 [2024-12-09 23:10:06.755381] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:39.444 [2024-12-09 23:10:06.756425] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:39.444 [2024-12-09 23:10:06.756467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.444 [2024-12-09 23:10:06.756481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:39.444 [2024-12-09 23:10:06.756493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.093 ms 00:28:39.444 [2024-12-09 23:10:06.756507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.444 [2024-12-09 23:10:06.756592] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8faee0eb-aa25-4084-862f-1e016fe6ecac 00:28:39.444 [2024-12-09 23:10:06.759081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.444 [2024-12-09 23:10:06.759122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:39.444 [2024-12-09 23:10:06.759143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:39.444 [2024-12-09 23:10:06.759154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.444 [2024-12-09 23:10:06.768799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.444 [2024-12-09 23:10:06.768844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:39.444 [2024-12-09 23:10:06.768861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.572 ms 00:28:39.444 [2024-12-09 23:10:06.768876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.444 [2024-12-09 23:10:06.768994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.444 [2024-12-09 23:10:06.769011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:39.444 [2024-12-09 23:10:06.769030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:28:39.444 [2024-12-09 23:10:06.769042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.444 [2024-12-09 23:10:06.769111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.444 [2024-12-09 23:10:06.769123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:39.444 [2024-12-09 23:10:06.769137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:39.444 [2024-12-09 23:10:06.769148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.444 [2024-12-09 23:10:06.769189] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:39.444 [2024-12-09 23:10:06.775012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.444 [2024-12-09 23:10:06.775064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:39.444 [2024-12-09 23:10:06.775078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.846 ms 00:28:39.444 [2024-12-09 23:10:06.775095] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.444 [2024-12-09 23:10:06.775142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.444 [2024-12-09 23:10:06.775158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:39.444 [2024-12-09 23:10:06.775169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:39.444 [2024-12-09 23:10:06.775182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.444 [2024-12-09 23:10:06.775229] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:39.444 [2024-12-09 23:10:06.775375] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:39.444 [2024-12-09 23:10:06.775390] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:39.444 [2024-12-09 23:10:06.775408] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:39.444 [2024-12-09 23:10:06.775423] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:39.444 [2024-12-09 23:10:06.775438] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:39.444 [2024-12-09 23:10:06.775467] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:39.444 [2024-12-09 23:10:06.775483] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:39.444 [2024-12-09 23:10:06.775494] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:39.444 [2024-12-09 23:10:06.775507] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:39.444 [2024-12-09 23:10:06.775522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.444 [2024-12-09 23:10:06.775535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:39.444 [2024-12-09 23:10:06.775546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:28:39.444 [2024-12-09 23:10:06.775559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.444 [2024-12-09 23:10:06.775634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.444 [2024-12-09 23:10:06.775648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:39.444 [2024-12-09 23:10:06.775659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:39.444 [2024-12-09 23:10:06.775674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.444 [2024-12-09 23:10:06.775756] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:39.444 [2024-12-09 23:10:06.775774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:39.444 [2024-12-09 23:10:06.775785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:39.444 [2024-12-09 23:10:06.775798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.444 [2024-12-09 23:10:06.775809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:39.444 [2024-12-09 23:10:06.775821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:39.444 [2024-12-09 23:10:06.775830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:39.444 
[2024-12-09 23:10:06.775842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:39.444 [2024-12-09 23:10:06.775852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:39.444 [2024-12-09 23:10:06.775864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:39.444 [2024-12-09 23:10:06.775873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:39.444 [2024-12-09 23:10:06.775900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:39.444 [2024-12-09 23:10:06.775912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:39.444 [2024-12-09 23:10:06.775925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:39.444 [2024-12-09 23:10:06.775934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:39.444 [2024-12-09 23:10:06.775949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.444 [2024-12-09 23:10:06.775958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:39.444 [2024-12-09 23:10:06.775970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:39.444 [2024-12-09 23:10:06.775980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.444 [2024-12-09 23:10:06.775992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:39.444 [2024-12-09 23:10:06.776018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:39.444 [2024-12-09 23:10:06.776030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.444 [2024-12-09 23:10:06.776040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:39.444 [2024-12-09 23:10:06.776052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:39.444 [2024-12-09 23:10:06.776062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.444 [2024-12-09 23:10:06.776074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:39.445 [2024-12-09 23:10:06.776084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:39.445 [2024-12-09 23:10:06.776096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.445 [2024-12-09 23:10:06.776106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:39.445 [2024-12-09 23:10:06.776120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:39.445 [2024-12-09 23:10:06.776131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.445 [2024-12-09 23:10:06.776146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:39.445 [2024-12-09 23:10:06.776156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:39.445 [2024-12-09 23:10:06.776168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:39.445 [2024-12-09 23:10:06.776178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:39.445 [2024-12-09 23:10:06.776192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:39.445 [2024-12-09 23:10:06.776203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:39.445 [2024-12-09 23:10:06.776215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:39.445 [2024-12-09 23:10:06.776225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:28:39.445 [2024-12-09 23:10:06.776238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.445 [2024-12-09 23:10:06.776247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:39.445 [2024-12-09 23:10:06.776260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:39.445 [2024-12-09 23:10:06.776270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.445 [2024-12-09 23:10:06.776281] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:39.445 [2024-12-09 23:10:06.776292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:39.445 [2024-12-09 23:10:06.776305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:39.445 [2024-12-09 23:10:06.776316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.445 [2024-12-09 23:10:06.776345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:39.445 [2024-12-09 23:10:06.776355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:39.445 [2024-12-09 23:10:06.776368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:39.445 [2024-12-09 23:10:06.776378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:39.445 [2024-12-09 23:10:06.776389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:39.445 [2024-12-09 23:10:06.776399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:39.445 [2024-12-09 23:10:06.776413] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:39.445 [2024-12-09 23:10:06.776425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.445 [2024-12-09 23:10:06.776439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:39.445 [2024-12-09 23:10:06.776450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:39.445 [2024-12-09 23:10:06.776463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:39.445 [2024-12-09 23:10:06.776473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:39.445 [2024-12-09 23:10:06.776497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:39.445 [2024-12-09 23:10:06.776508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:39.445 [2024-12-09 23:10:06.776521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:39.445 [2024-12-09 23:10:06.776532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:39.445 [2024-12-09 23:10:06.776550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:39.445 [2024-12-09 23:10:06.776561] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:39.445 [2024-12-09 23:10:06.776574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:39.445 [2024-12-09 23:10:06.776584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:39.445 [2024-12-09 23:10:06.776597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:39.445 [2024-12-09 23:10:06.776608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:39.445 [2024-12-09 23:10:06.776620] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:39.445 [2024-12-09 23:10:06.776632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.445 [2024-12-09 23:10:06.776649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:39.445 [2024-12-09 23:10:06.776661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:39.445 [2024-12-09 23:10:06.776674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:39.445 [2024-12-09 23:10:06.776685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:39.445 [2024-12-09 23:10:06.776698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.445 [2024-12-09 23:10:06.776710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:39.445 [2024-12-09 23:10:06.776723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:28:39.445 [2024-12-09 23:10:06.776734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.445 [2024-12-09 23:10:06.776780] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
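A note on the layout dump above: the sizes are internally consistent and easy to check by hand. The base device was reported earlier as 26476544 blocks of 4096 bytes, and the L2P region holds 20971520 entries of 4 bytes each. A minimal sanity check in shell arithmetic (not part of the test run itself):

    echo $(( 26476544 * 4096 / 1048576 ))   # base bdev capacity in MiB -> 103424
    echo $(( 20971520 * 4 / 1048576 ))      # full L2P table in MiB -> 80

The 80 MiB L2P table is also why the l2p_cache message later in this run caps the resident set at 19 (of 20) MiB: --l2p_dram_limit 20 keeps only a quarter of the table in DRAM, and the remainder is cached in from media on demand.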
00:28:39.445 [2024-12-09 23:10:06.776792] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:43.630 [2024-12-09 23:10:10.153432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.630 [2024-12-09 23:10:10.153549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:43.630 [2024-12-09 23:10:10.153573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3382.125 ms 00:28:43.630 [2024-12-09 23:10:10.153584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.630 [2024-12-09 23:10:10.200419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.200768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:43.631 [2024-12-09 23:10:10.200947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.540 ms 00:28:43.631 [2024-12-09 23:10:10.200987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.201185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.201225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:43.631 [2024-12-09 23:10:10.201306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:43.631 [2024-12-09 23:10:10.201337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.262641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.262954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:43.631 [2024-12-09 23:10:10.262991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.334 ms 00:28:43.631 [2024-12-09 23:10:10.263003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.263071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.263084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:43.631 [2024-12-09 23:10:10.263098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:43.631 [2024-12-09 23:10:10.263112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.263670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.263689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:43.631 [2024-12-09 23:10:10.263703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:28:43.631 [2024-12-09 23:10:10.263713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.263844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.263858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:43.631 [2024-12-09 23:10:10.263875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:43.631 [2024-12-09 23:10:10.263886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.286166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.286236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:43.631 [2024-12-09 
23:10:10.286256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.288 ms 00:28:43.631 [2024-12-09 23:10:10.286283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.300359] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:28:43.631 [2024-12-09 23:10:10.306984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.307057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:43.631 [2024-12-09 23:10:10.307074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.607 ms 00:28:43.631 [2024-12-09 23:10:10.307088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.399410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.399511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:43.631 [2024-12-09 23:10:10.399531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.409 ms 00:28:43.631 [2024-12-09 23:10:10.399545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.399780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.399802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:43.631 [2024-12-09 23:10:10.399815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:28:43.631 [2024-12-09 23:10:10.399834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.439230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.439319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:43.631 [2024-12-09 23:10:10.439352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.374 ms 00:28:43.631 [2024-12-09 23:10:10.439367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.477908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.478220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:43.631 [2024-12-09 23:10:10.478249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.521 ms 00:28:43.631 [2024-12-09 23:10:10.478264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.479144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.479171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:43.631 [2024-12-09 23:10:10.479185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:28:43.631 [2024-12-09 23:10:10.479198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.585834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.585921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:43.631 [2024-12-09 23:10:10.585939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.712 ms 00:28:43.631 [2024-12-09 23:10:10.585954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 
23:10:10.626902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.626987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:43.631 [2024-12-09 23:10:10.627010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.869 ms 00:28:43.631 [2024-12-09 23:10:10.627024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.666332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.666422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:43.631 [2024-12-09 23:10:10.666439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.295 ms 00:28:43.631 [2024-12-09 23:10:10.666475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.706301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.706390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:43.631 [2024-12-09 23:10:10.706408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.810 ms 00:28:43.631 [2024-12-09 23:10:10.706422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.706527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.706549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:43.631 [2024-12-09 23:10:10.706562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:43.631 [2024-12-09 23:10:10.706576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.706714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.631 [2024-12-09 23:10:10.706731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:43.631 [2024-12-09 23:10:10.706742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:43.631 [2024-12-09 23:10:10.706755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.631 [2024-12-09 23:10:10.708215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3959.017 ms, result 0 00:28:43.631 { 00:28:43.631 "name": "ftl0", 00:28:43.631 "uuid": "8faee0eb-aa25-4084-862f-1e016fe6ecac" 00:28:43.631 } 00:28:43.631 23:10:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:28:43.631 23:10:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:28:43.631 23:10:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:28:43.631 23:10:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:28:43.892 [2024-12-09 23:10:11.064238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:28:43.892 I/O size of 69632 is greater than zero copy threshold (65536). 00:28:43.892 Zero copy mechanism will not be used. 00:28:43.892 Running I/O for 4 seconds... 
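The zero copy notice above is expected for this first workload: bdevperf reports a zero copy threshold of 65536 bytes, and the run was started with -o 69632, i.e. 68 KiB I/Os. A trivial check of the two numbers:

    echo $(( 69632 / 4096 ))    # 17 blocks of 4 KiB per I/O
    echo $(( 69632 > 65536 ))   # 1 -> above the threshold, so buffers are copied

The two later runs use -o 4096 and stay under the threshold, so the notice does not repeat there.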
00:28:45.760 1586.00 IOPS, 105.32 MiB/s [2024-12-09T23:10:14.471Z] 1599.50 IOPS, 106.22 MiB/s [2024-12-09T23:10:15.404Z] 1616.00 IOPS, 107.31 MiB/s 00:28:48.068 Latency(us) 00:28:48.068 [2024-12-09T23:10:15.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.068 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:28:48.068 ftl0 : 4.00 1628.68 108.15 0.00 0.00 639.89 225.36 2224.01 00:28:48.068 [2024-12-09T23:10:15.404Z] =================================================================================================================== 00:28:48.068 [2024-12-09T23:10:15.404Z] Total : 1628.68 108.15 0.00 0.00 639.89 225.36 2224.01 00:28:48.068 [2024-12-09 23:10:15.068156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:28:48.068 { 00:28:48.068 "results": [ 00:28:48.068 { 00:28:48.068 "job": "ftl0", 00:28:48.068 "core_mask": "0x1", 00:28:48.068 "workload": "randwrite", 00:28:48.068 "status": "finished", 00:28:48.068 "queue_depth": 1, 00:28:48.068 "io_size": 69632, 00:28:48.068 "runtime": 4.000179, 00:28:48.068 "iops": 1628.6771166990277, 00:28:48.068 "mibps": 108.15433978079481, 00:28:48.068 "io_failed": 0, 00:28:48.068 "io_timeout": 0, 00:28:48.068 "avg_latency_us": 639.8942851066583, 00:28:48.068 "min_latency_us": 225.36224899598395, 00:28:48.068 "max_latency_us": 2224.0128514056223 00:28:48.068 } 00:28:48.068 ], 00:28:48.068 "core_count": 1 00:28:48.068 } 00:28:48.068 23:10:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:28:48.068 [2024-12-09 23:10:15.189806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:28:48.068 Running I/O for 4 seconds... 
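The q=1 results above are self-consistent: at queue depth 1, IOPS times average latency should work out to roughly one outstanding I/O (Little's law). A back-of-the-envelope check with the reported figures:

    awk 'BEGIN { print 1628.68 * 639.89 / 1e6 }'    # ~1.04 I/Os in flight at qd 1
    awk 'BEGIN { print 9178.84 * 13914.15 / 1e6 }'  # ~128 in flight for the qd 128 run below

In other words, the qd 1 run is latency-bound and the qd 128 run that follows is queue-bound; the jump from ~640 us to ~13.9 ms average latency is queueing delay rather than the device slowing down.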
00:28:49.940 9310.00 IOPS, 36.37 MiB/s [2024-12-09T23:10:18.214Z] 9226.50 IOPS, 36.04 MiB/s [2024-12-09T23:10:19.632Z] 9139.33 IOPS, 35.70 MiB/s [2024-12-09T23:10:19.632Z] 9185.75 IOPS, 35.88 MiB/s 00:28:52.296 Latency(us) 00:28:52.296 [2024-12-09T23:10:19.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.296 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:28:52.296 ftl0 : 4.02 9178.84 35.85 0.00 0.00 13914.15 264.84 38532.01 00:28:52.296 [2024-12-09T23:10:19.632Z] =================================================================================================================== 00:28:52.296 [2024-12-09T23:10:19.632Z] Total : 9178.84 35.85 0.00 0.00 13914.15 0.00 38532.01 00:28:52.296 { 00:28:52.296 "results": [ 00:28:52.296 { 00:28:52.296 "job": "ftl0", 00:28:52.296 "core_mask": "0x1", 00:28:52.296 "workload": "randwrite", 00:28:52.296 "status": "finished", 00:28:52.296 "queue_depth": 128, 00:28:52.296 "io_size": 4096, 00:28:52.296 "runtime": 4.016628, 00:28:52.296 "iops": 9178.843547373568, 00:28:52.296 "mibps": 35.854857606928, 00:28:52.296 "io_failed": 0, 00:28:52.296 "io_timeout": 0, 00:28:52.296 "avg_latency_us": 13914.149968213966, 00:28:52.296 "min_latency_us": 264.8417670682731, 00:28:52.296 "max_latency_us": 38532.00963855422 00:28:52.296 } 00:28:52.296 ], 00:28:52.296 "core_count": 1 00:28:52.296 } 00:28:52.296 [2024-12-09 23:10:19.212730] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:28:52.296 23:10:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:28:52.296 [2024-12-09 23:10:19.346061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:28:52.296 Running I/O for 4 seconds... 
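Each perform_tests call above also prints its results as a JSON block alongside the human-readable table. When the numbers need to be consumed mechanically, the same headline figures can be extracted with jq (already used earlier in this job); a sketch, assuming one of the JSON blocks has been saved to a hypothetical results.json:

    jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json

The field names (.job, .iops, .avg_latency_us, and friends) are exactly the keys visible in the blocks above.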
00:28:54.166 7630.00 IOPS, 29.80 MiB/s [2024-12-09T23:10:22.436Z] 7630.00 IOPS, 29.80 MiB/s [2024-12-09T23:10:23.381Z] 7695.00 IOPS, 30.06 MiB/s [2024-12-09T23:10:23.381Z] 7610.75 IOPS, 29.73 MiB/s 00:28:56.045 Latency(us) 00:28:56.045 [2024-12-09T23:10:23.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.045 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:56.045 Verification LBA range: start 0x0 length 0x1400000 00:28:56.045 ftl0 : 4.01 7622.34 29.77 0.00 0.00 16741.33 292.81 35163.09 00:28:56.045 [2024-12-09T23:10:23.381Z] =================================================================================================================== 00:28:56.045 [2024-12-09T23:10:23.381Z] Total : 7622.34 29.77 0.00 0.00 16741.33 0.00 35163.09 00:28:56.045 [2024-12-09 23:10:23.370123] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:28:56.045 { 00:28:56.045 "results": [ 00:28:56.045 { 00:28:56.045 "job": "ftl0", 00:28:56.045 "core_mask": "0x1", 00:28:56.045 "workload": "verify", 00:28:56.045 "status": "finished", 00:28:56.045 "verify_range": { 00:28:56.045 "start": 0, 00:28:56.045 "length": 20971520 00:28:56.045 }, 00:28:56.045 "queue_depth": 128, 00:28:56.045 "io_size": 4096, 00:28:56.045 "runtime": 4.010449, 00:28:56.045 "iops": 7622.338546132864, 00:28:56.045 "mibps": 29.7747599458315, 00:28:56.045 "io_failed": 0, 00:28:56.045 "io_timeout": 0, 00:28:56.045 "avg_latency_us": 16741.32758679719, 00:28:56.045 "min_latency_us": 292.8064257028112, 00:28:56.045 "max_latency_us": 35163.09076305221 00:28:56.045 } 00:28:56.045 ], 00:28:56.045 "core_count": 1 00:28:56.045 } 00:28:56.327 23:10:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:28:56.327 [2024-12-09 23:10:23.629846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.327 [2024-12-09 23:10:23.630171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:56.327 [2024-12-09 23:10:23.630201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:56.327 [2024-12-09 23:10:23.630216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.327 [2024-12-09 23:10:23.630260] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:56.327 [2024-12-09 23:10:23.634771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.327 [2024-12-09 23:10:23.634817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:56.327 [2024-12-09 23:10:23.634836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.493 ms 00:28:56.327 [2024-12-09 23:10:23.634847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.327 [2024-12-09 23:10:23.636914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.327 [2024-12-09 23:10:23.637087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:56.327 [2024-12-09 23:10:23.637126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.035 ms 00:28:56.327 [2024-12-09 23:10:23.637138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.589 [2024-12-09 23:10:23.862507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.589 [2024-12-09 23:10:23.862834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:28:56.589 [2024-12-09 23:10:23.862880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 225.681 ms 00:28:56.589 [2024-12-09 23:10:23.862893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.589 [2024-12-09 23:10:23.868102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.589 [2024-12-09 23:10:23.868150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:56.589 [2024-12-09 23:10:23.868167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.155 ms 00:28:56.589 [2024-12-09 23:10:23.868182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.589 [2024-12-09 23:10:23.907616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.589 [2024-12-09 23:10:23.907711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:56.589 [2024-12-09 23:10:23.907734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.384 ms 00:28:56.589 [2024-12-09 23:10:23.907745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.848 [2024-12-09 23:10:23.932598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.848 [2024-12-09 23:10:23.932697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:56.848 [2024-12-09 23:10:23.932719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.796 ms 00:28:56.848 [2024-12-09 23:10:23.932731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.848 [2024-12-09 23:10:23.932984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.848 [2024-12-09 23:10:23.932999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:56.848 [2024-12-09 23:10:23.933019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:28:56.848 [2024-12-09 23:10:23.933030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.848 [2024-12-09 23:10:23.973377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.848 [2024-12-09 23:10:23.973719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:56.848 [2024-12-09 23:10:23.973758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.377 ms 00:28:56.848 [2024-12-09 23:10:23.973769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.848 [2024-12-09 23:10:24.013836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.848 [2024-12-09 23:10:24.013928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:56.848 [2024-12-09 23:10:24.013949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.038 ms 00:28:56.848 [2024-12-09 23:10:24.013960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.848 [2024-12-09 23:10:24.052807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.848 [2024-12-09 23:10:24.053154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:56.848 [2024-12-09 23:10:24.053191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.805 ms 00:28:56.848 [2024-12-09 23:10:24.053203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.848 [2024-12-09 23:10:24.093137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.848 [2024-12-09 23:10:24.093248] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:56.848 [2024-12-09 23:10:24.093276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.801 ms 00:28:56.848 [2024-12-09 23:10:24.093288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.848 [2024-12-09 23:10:24.093376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:56.848 [2024-12-09 23:10:24.093397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:28:56.848 [2024-12-09 23:10:24.093704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:56.848 [2024-12-09 23:10:24.093850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.093862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.093875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.093887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.093900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.093924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.093939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.093950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.093964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.093976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.093991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094679] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:56.849 [2024-12-09 23:10:24.094740] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:56.849 [2024-12-09 23:10:24.094754] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8faee0eb-aa25-4084-862f-1e016fe6ecac 00:28:56.849 [2024-12-09 23:10:24.094769] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:56.849 [2024-12-09 23:10:24.094782] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:56.849 [2024-12-09 23:10:24.094792] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:56.849 [2024-12-09 23:10:24.094807] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:56.849 [2024-12-09 23:10:24.094818] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:56.849 [2024-12-09 23:10:24.094831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:56.849 [2024-12-09 23:10:24.094841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:56.849 [2024-12-09 23:10:24.094856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:56.849 [2024-12-09 23:10:24.094865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:56.849 [2024-12-09 23:10:24.094880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.849 [2024-12-09 23:10:24.094891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:56.849 [2024-12-09 23:10:24.094906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:28:56.849 [2024-12-09 23:10:24.094916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.849 [2024-12-09 23:10:24.115622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.849 [2024-12-09 23:10:24.115933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:56.849 [2024-12-09 23:10:24.115969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.628 ms 00:28:56.849 [2024-12-09 23:10:24.115980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.849 [2024-12-09 23:10:24.116656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:56.850 [2024-12-09 23:10:24.116671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:56.850 [2024-12-09 23:10:24.116685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:28:56.850 [2024-12-09 23:10:24.116696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.850 [2024-12-09 23:10:24.174426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.850 [2024-12-09 23:10:24.174758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:56.850 [2024-12-09 23:10:24.174799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.850 [2024-12-09 23:10:24.174811] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:56.850 [2024-12-09 23:10:24.174902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.850 [2024-12-09 23:10:24.174914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:56.850 [2024-12-09 23:10:24.174927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.850 [2024-12-09 23:10:24.174938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.850 [2024-12-09 23:10:24.175079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.850 [2024-12-09 23:10:24.175093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:56.850 [2024-12-09 23:10:24.175108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.850 [2024-12-09 23:10:24.175119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:56.850 [2024-12-09 23:10:24.175141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:56.850 [2024-12-09 23:10:24.175153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:56.850 [2024-12-09 23:10:24.175175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:56.850 [2024-12-09 23:10:24.175186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.108 [2024-12-09 23:10:24.303753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.108 [2024-12-09 23:10:24.303874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:57.109 [2024-12-09 23:10:24.303914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.109 [2024-12-09 23:10:24.303926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.109 [2024-12-09 23:10:24.412877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.109 [2024-12-09 23:10:24.412964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:57.109 [2024-12-09 23:10:24.412984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.109 [2024-12-09 23:10:24.412995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.109 [2024-12-09 23:10:24.413148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.109 [2024-12-09 23:10:24.413162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:57.109 [2024-12-09 23:10:24.413177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.109 [2024-12-09 23:10:24.413188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.109 [2024-12-09 23:10:24.413248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.109 [2024-12-09 23:10:24.413261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:57.109 [2024-12-09 23:10:24.413275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.109 [2024-12-09 23:10:24.413286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.109 [2024-12-09 23:10:24.413430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.109 [2024-12-09 23:10:24.413446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:57.109 [2024-12-09 23:10:24.413498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:28:57.109 [2024-12-09 23:10:24.413509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.109 [2024-12-09 23:10:24.413569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.109 [2024-12-09 23:10:24.413583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:57.109 [2024-12-09 23:10:24.413596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.109 [2024-12-09 23:10:24.413606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.109 [2024-12-09 23:10:24.413651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.109 [2024-12-09 23:10:24.413667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:57.109 [2024-12-09 23:10:24.413681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.109 [2024-12-09 23:10:24.413703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.109 [2024-12-09 23:10:24.413752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:57.109 [2024-12-09 23:10:24.413764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:57.109 [2024-12-09 23:10:24.413777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:57.109 [2024-12-09 23:10:24.413787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:57.109 [2024-12-09 23:10:24.413969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 785.341 ms, result 0 00:28:57.109 true 00:28:57.366 23:10:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78244 00:28:57.366 23:10:24 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78244 ']' 00:28:57.366 23:10:24 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78244 00:28:57.366 23:10:24 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:57.366 23:10:24 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.366 23:10:24 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78244 00:28:57.366 killing process with pid 78244 00:28:57.366 Received shutdown signal, test time was about 4.000000 seconds 00:28:57.366 00:28:57.366 Latency(us) 00:28:57.366 [2024-12-09T23:10:24.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.366 [2024-12-09T23:10:24.702Z] =================================================================================================================== 00:28:57.366 [2024-12-09T23:10:24.702Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:57.366 23:10:24 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.366 23:10:24 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.366 23:10:24 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78244' 00:28:57.366 23:10:24 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78244 00:28:57.366 23:10:24 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78244 00:29:01.554 23:10:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:01.554 23:10:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:29:01.554 Remove shared memory files 00:29:01.554 23:10:28 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:01.554 23:10:28 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:29:01.554 23:10:28 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:29:01.554 23:10:28 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:29:01.554 23:10:28 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:01.554 23:10:28 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:29:01.554 00:29:01.554 real 0m26.326s 00:29:01.554 user 0m29.073s 00:29:01.554 sys 0m1.422s 00:29:01.554 23:10:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.554 23:10:28 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:01.554 ************************************ 00:29:01.554 END TEST ftl_bdevperf 00:29:01.554 ************************************ 00:29:01.554 23:10:28 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:29:01.554 23:10:28 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:01.554 23:10:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.554 23:10:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:01.554 ************************************ 00:29:01.554 START TEST ftl_trim 00:29:01.554 ************************************ 00:29:01.554 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:29:01.555 * Looking for test storage... 00:29:01.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:01.555 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:01.555 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:29:01.555 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:01.555 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.555 23:10:28 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:29:01.555 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.555 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:01.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.555 --rc genhtml_branch_coverage=1 00:29:01.555 --rc genhtml_function_coverage=1 00:29:01.555 --rc genhtml_legend=1 00:29:01.555 --rc geninfo_all_blocks=1 00:29:01.555 --rc geninfo_unexecuted_blocks=1 00:29:01.555 00:29:01.555 ' 00:29:01.555 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:01.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.555 --rc genhtml_branch_coverage=1 00:29:01.555 --rc genhtml_function_coverage=1 00:29:01.555 --rc genhtml_legend=1 00:29:01.555 --rc geninfo_all_blocks=1 00:29:01.555 --rc geninfo_unexecuted_blocks=1 00:29:01.555 00:29:01.555 ' 00:29:01.555 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:01.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.555 --rc genhtml_branch_coverage=1 00:29:01.555 --rc genhtml_function_coverage=1 00:29:01.555 --rc genhtml_legend=1 00:29:01.555 --rc geninfo_all_blocks=1 00:29:01.555 --rc geninfo_unexecuted_blocks=1 00:29:01.555 00:29:01.555 ' 00:29:01.555 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:01.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.555 --rc genhtml_branch_coverage=1 00:29:01.555 --rc genhtml_function_coverage=1 00:29:01.555 --rc genhtml_legend=1 00:29:01.555 --rc geninfo_all_blocks=1 00:29:01.555 --rc geninfo_unexecuted_blocks=1 00:29:01.555 00:29:01.555 ' 00:29:01.555 23:10:28 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
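The cmp_versions walk above is scripts/common.sh deciding that the installed lcov (1.15) is older than 2, so the legacy --rc lcov_*_coverage options get exported. An equivalent check, sketched with sort -V for brevity where the traced helper splits the version fields manually:

    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    ver=$(lcov --version | awk '{print $NF}')   # 1.15 in this run
    if lt "$ver" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi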
00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:01.814 23:10:28 ftl.ftl_trim -- 
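At this point trim.sh has its whole environment resolved. The knobs, collected from the trace above for reference (values are verbatim from this run; the comments are editorial):

    device=0000:00:11.0           # base NVMe, attached below as nvme0n1
    cache_device=0000:00:10.0     # NV-cache NVMe, attached below as nvc0n1
    timeout=240                   # rpc.py -t value used later for bdev_ftl_create
    data_size_in_blocks=65536
    unmap_size_in_blocks=1024
    export FTL_BDEV_NAME=ftl0
    export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json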
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78609 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:29:01.814 23:10:28 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78609 00:29:01.814 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78609 ']' 00:29:01.814 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.814 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.814 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.814 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.814 23:10:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:29:01.814 [2024-12-09 23:10:29.038893] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:29:01.814 [2024-12-09 23:10:29.039911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78609 ] 00:29:02.079 [2024-12-09 23:10:29.243040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:02.079 [2024-12-09 23:10:29.390613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.079 [2024-12-09 23:10:29.390754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.079 [2024-12-09 23:10:29.390789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.456 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.456 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:29:03.456 23:10:30 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:03.456 23:10:30 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:29:03.456 23:10:30 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:03.456 23:10:30 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:29:03.456 23:10:30 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:29:03.456 23:10:30 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:03.456 23:10:30 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:03.456 23:10:30 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:29:03.456 23:10:30 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:03.456 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:29:03.456 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:03.456 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:29:03.456 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:29:03.456 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:03.715 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:03.715 { 00:29:03.715 "name": "nvme0n1", 00:29:03.715 "aliases": [ 
00:29:03.715 "a5d1a748-7d17-4362-a621-0d30247233f3" 00:29:03.715 ], 00:29:03.715 "product_name": "NVMe disk", 00:29:03.715 "block_size": 4096, 00:29:03.715 "num_blocks": 1310720, 00:29:03.715 "uuid": "a5d1a748-7d17-4362-a621-0d30247233f3", 00:29:03.715 "numa_id": -1, 00:29:03.715 "assigned_rate_limits": { 00:29:03.715 "rw_ios_per_sec": 0, 00:29:03.715 "rw_mbytes_per_sec": 0, 00:29:03.715 "r_mbytes_per_sec": 0, 00:29:03.715 "w_mbytes_per_sec": 0 00:29:03.715 }, 00:29:03.715 "claimed": true, 00:29:03.715 "claim_type": "read_many_write_one", 00:29:03.715 "zoned": false, 00:29:03.715 "supported_io_types": { 00:29:03.715 "read": true, 00:29:03.715 "write": true, 00:29:03.715 "unmap": true, 00:29:03.715 "flush": true, 00:29:03.715 "reset": true, 00:29:03.715 "nvme_admin": true, 00:29:03.715 "nvme_io": true, 00:29:03.715 "nvme_io_md": false, 00:29:03.715 "write_zeroes": true, 00:29:03.715 "zcopy": false, 00:29:03.715 "get_zone_info": false, 00:29:03.715 "zone_management": false, 00:29:03.715 "zone_append": false, 00:29:03.715 "compare": true, 00:29:03.715 "compare_and_write": false, 00:29:03.715 "abort": true, 00:29:03.715 "seek_hole": false, 00:29:03.715 "seek_data": false, 00:29:03.715 "copy": true, 00:29:03.715 "nvme_iov_md": false 00:29:03.715 }, 00:29:03.715 "driver_specific": { 00:29:03.715 "nvme": [ 00:29:03.715 { 00:29:03.715 "pci_address": "0000:00:11.0", 00:29:03.715 "trid": { 00:29:03.715 "trtype": "PCIe", 00:29:03.716 "traddr": "0000:00:11.0" 00:29:03.716 }, 00:29:03.716 "ctrlr_data": { 00:29:03.716 "cntlid": 0, 00:29:03.716 "vendor_id": "0x1b36", 00:29:03.716 "model_number": "QEMU NVMe Ctrl", 00:29:03.716 "serial_number": "12341", 00:29:03.716 "firmware_revision": "8.0.0", 00:29:03.716 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:03.716 "oacs": { 00:29:03.716 "security": 0, 00:29:03.716 "format": 1, 00:29:03.716 "firmware": 0, 00:29:03.716 "ns_manage": 1 00:29:03.716 }, 00:29:03.716 "multi_ctrlr": false, 00:29:03.716 "ana_reporting": false 00:29:03.716 }, 00:29:03.716 "vs": { 00:29:03.716 "nvme_version": "1.4" 00:29:03.716 }, 00:29:03.716 "ns_data": { 00:29:03.716 "id": 1, 00:29:03.716 "can_share": false 00:29:03.716 } 00:29:03.716 } 00:29:03.716 ], 00:29:03.716 "mp_policy": "active_passive" 00:29:03.716 } 00:29:03.716 } 00:29:03.716 ]' 00:29:03.716 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:03.716 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:29:03.716 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:03.716 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:03.716 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:03.716 23:10:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:29:03.716 23:10:30 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:29:03.716 23:10:30 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:03.716 23:10:30 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:29:03.716 23:10:30 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:03.716 23:10:30 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:03.974 23:10:31 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=ae0437df-97c0-43bf-90c9-b96d68506dbe 00:29:03.974 23:10:31 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:29:03.974 23:10:31 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
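The bdev_get_bdevs dump above feeds get_bdev_size, which is just block_size x num_blocks converted to MiB: 4096 B x 1310720 blocks = 5120 MiB. Since the requested 103424 MiB exceeds that, the [[ 103424 -le 5120 ]] test fails and the script falls through to building a thin-provisioned lvol instead. A simplified sketch of the helper:

    # size of a bdev in MiB, as derived in the trace (bs=4096, nb=1310720 -> 5120)
    get_bdev_size() {
        local bdev=$1 info bs nb
        info=$($rpc_py bdev_get_bdevs -b "$bdev")
        bs=$(jq '.[] .block_size' <<< "$info")
        nb=$(jq '.[] .num_blocks' <<< "$info")
        echo $(( bs * nb / 1024 / 1024 ))
    }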
bdev_lvol_delete_lvstore -u ae0437df-97c0-43bf-90c9-b96d68506dbe 00:29:04.237 23:10:31 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:04.501 23:10:31 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=b1efb439-0089-4b2f-a963-a315768df67c 00:29:04.501 23:10:31 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b1efb439-0089-4b2f-a963-a315768df67c 00:29:04.759 23:10:31 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:04.759 23:10:31 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:04.759 23:10:31 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:29:04.759 23:10:31 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:04.759 23:10:31 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:04.759 23:10:31 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:29:04.759 23:10:31 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:04.759 23:10:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:04.759 23:10:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:04.759 23:10:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:29:04.759 23:10:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:29:04.759 23:10:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:05.018 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:05.018 { 00:29:05.018 "name": "57a90c22-4526-4328-886d-d8c0e25b5f63", 00:29:05.018 "aliases": [ 00:29:05.018 "lvs/nvme0n1p0" 00:29:05.018 ], 00:29:05.018 "product_name": "Logical Volume", 00:29:05.018 "block_size": 4096, 00:29:05.018 "num_blocks": 26476544, 00:29:05.018 "uuid": "57a90c22-4526-4328-886d-d8c0e25b5f63", 00:29:05.018 "assigned_rate_limits": { 00:29:05.018 "rw_ios_per_sec": 0, 00:29:05.018 "rw_mbytes_per_sec": 0, 00:29:05.018 "r_mbytes_per_sec": 0, 00:29:05.018 "w_mbytes_per_sec": 0 00:29:05.018 }, 00:29:05.018 "claimed": false, 00:29:05.018 "zoned": false, 00:29:05.018 "supported_io_types": { 00:29:05.018 "read": true, 00:29:05.018 "write": true, 00:29:05.018 "unmap": true, 00:29:05.018 "flush": false, 00:29:05.018 "reset": true, 00:29:05.018 "nvme_admin": false, 00:29:05.018 "nvme_io": false, 00:29:05.018 "nvme_io_md": false, 00:29:05.018 "write_zeroes": true, 00:29:05.018 "zcopy": false, 00:29:05.018 "get_zone_info": false, 00:29:05.018 "zone_management": false, 00:29:05.018 "zone_append": false, 00:29:05.018 "compare": false, 00:29:05.018 "compare_and_write": false, 00:29:05.018 "abort": false, 00:29:05.018 "seek_hole": true, 00:29:05.018 "seek_data": true, 00:29:05.018 "copy": false, 00:29:05.018 "nvme_iov_md": false 00:29:05.018 }, 00:29:05.018 "driver_specific": { 00:29:05.018 "lvol": { 00:29:05.018 "lvol_store_uuid": "b1efb439-0089-4b2f-a963-a315768df67c", 00:29:05.018 "base_bdev": "nvme0n1", 00:29:05.018 "thin_provision": true, 00:29:05.018 "num_allocated_clusters": 0, 00:29:05.018 "snapshot": false, 00:29:05.018 "clone": false, 00:29:05.018 "esnap_clone": false 00:29:05.018 } 00:29:05.018 } 00:29:05.018 } 00:29:05.018 ]' 00:29:05.018 23:10:32 ftl.ftl_trim -- 
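clear_lvols has just deleted the stale lvstore ae0437df... left behind by an earlier run, and a fresh store plus a thin-provisioned 103424 MiB volume were created on nvme0n1. The same sequence as bare RPC calls (a sketch; the uuid is the one from this run):

    for lvs in $($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc_py bdev_lvol_delete_lvstore -u "$lvs"
    done
    $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u b1efb439-0089-4b2f-a963-a315768df67c   # -t = thin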
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:05.018 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:29:05.018 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:05.018 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:05.018 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:05.018 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:29:05.018 23:10:32 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:29:05.018 23:10:32 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:29:05.018 23:10:32 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:05.277 23:10:32 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:05.277 23:10:32 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:05.277 23:10:32 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:05.277 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:05.277 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:05.277 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:29:05.277 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:29:05.277 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:05.535 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:05.535 { 00:29:05.535 "name": "57a90c22-4526-4328-886d-d8c0e25b5f63", 00:29:05.535 "aliases": [ 00:29:05.535 "lvs/nvme0n1p0" 00:29:05.535 ], 00:29:05.535 "product_name": "Logical Volume", 00:29:05.535 "block_size": 4096, 00:29:05.535 "num_blocks": 26476544, 00:29:05.535 "uuid": "57a90c22-4526-4328-886d-d8c0e25b5f63", 00:29:05.535 "assigned_rate_limits": { 00:29:05.535 "rw_ios_per_sec": 0, 00:29:05.535 "rw_mbytes_per_sec": 0, 00:29:05.535 "r_mbytes_per_sec": 0, 00:29:05.535 "w_mbytes_per_sec": 0 00:29:05.535 }, 00:29:05.535 "claimed": false, 00:29:05.535 "zoned": false, 00:29:05.535 "supported_io_types": { 00:29:05.535 "read": true, 00:29:05.535 "write": true, 00:29:05.535 "unmap": true, 00:29:05.535 "flush": false, 00:29:05.535 "reset": true, 00:29:05.535 "nvme_admin": false, 00:29:05.535 "nvme_io": false, 00:29:05.535 "nvme_io_md": false, 00:29:05.535 "write_zeroes": true, 00:29:05.535 "zcopy": false, 00:29:05.535 "get_zone_info": false, 00:29:05.535 "zone_management": false, 00:29:05.535 "zone_append": false, 00:29:05.535 "compare": false, 00:29:05.535 "compare_and_write": false, 00:29:05.535 "abort": false, 00:29:05.535 "seek_hole": true, 00:29:05.535 "seek_data": true, 00:29:05.535 "copy": false, 00:29:05.535 "nvme_iov_md": false 00:29:05.535 }, 00:29:05.535 "driver_specific": { 00:29:05.535 "lvol": { 00:29:05.535 "lvol_store_uuid": "b1efb439-0089-4b2f-a963-a315768df67c", 00:29:05.535 "base_bdev": "nvme0n1", 00:29:05.535 "thin_provision": true, 00:29:05.535 "num_allocated_clusters": 0, 00:29:05.535 "snapshot": false, 00:29:05.535 "clone": false, 00:29:05.535 "esnap_clone": false 00:29:05.535 } 00:29:05.535 } 00:29:05.535 } 00:29:05.535 ]' 00:29:05.535 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:05.535 23:10:32 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:29:05.535 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:05.535 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:29:05.535 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:05.535 23:10:32 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:29:05.535 23:10:32 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:29:05.535 23:10:32 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:05.794 23:10:33 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:29:05.794 23:10:33 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:29:05.794 23:10:33 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:05.794 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:05.794 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:05.794 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:29:05.794 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:29:05.794 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57a90c22-4526-4328-886d-d8c0e25b5f63 00:29:06.052 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:06.052 { 00:29:06.052 "name": "57a90c22-4526-4328-886d-d8c0e25b5f63", 00:29:06.052 "aliases": [ 00:29:06.052 "lvs/nvme0n1p0" 00:29:06.052 ], 00:29:06.052 "product_name": "Logical Volume", 00:29:06.052 "block_size": 4096, 00:29:06.052 "num_blocks": 26476544, 00:29:06.052 "uuid": "57a90c22-4526-4328-886d-d8c0e25b5f63", 00:29:06.052 "assigned_rate_limits": { 00:29:06.052 "rw_ios_per_sec": 0, 00:29:06.052 "rw_mbytes_per_sec": 0, 00:29:06.052 "r_mbytes_per_sec": 0, 00:29:06.052 "w_mbytes_per_sec": 0 00:29:06.052 }, 00:29:06.052 "claimed": false, 00:29:06.052 "zoned": false, 00:29:06.052 "supported_io_types": { 00:29:06.052 "read": true, 00:29:06.052 "write": true, 00:29:06.052 "unmap": true, 00:29:06.052 "flush": false, 00:29:06.052 "reset": true, 00:29:06.052 "nvme_admin": false, 00:29:06.052 "nvme_io": false, 00:29:06.052 "nvme_io_md": false, 00:29:06.052 "write_zeroes": true, 00:29:06.052 "zcopy": false, 00:29:06.052 "get_zone_info": false, 00:29:06.052 "zone_management": false, 00:29:06.052 "zone_append": false, 00:29:06.052 "compare": false, 00:29:06.052 "compare_and_write": false, 00:29:06.052 "abort": false, 00:29:06.052 "seek_hole": true, 00:29:06.052 "seek_data": true, 00:29:06.052 "copy": false, 00:29:06.052 "nvme_iov_md": false 00:29:06.052 }, 00:29:06.052 "driver_specific": { 00:29:06.052 "lvol": { 00:29:06.052 "lvol_store_uuid": "b1efb439-0089-4b2f-a963-a315768df67c", 00:29:06.052 "base_bdev": "nvme0n1", 00:29:06.052 "thin_provision": true, 00:29:06.052 "num_allocated_clusters": 0, 00:29:06.052 "snapshot": false, 00:29:06.052 "clone": false, 00:29:06.052 "esnap_clone": false 00:29:06.052 } 00:29:06.052 } 00:29:06.052 } 00:29:06.052 ]' 00:29:06.052 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:06.052 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:29:06.052 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:06.052 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
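The write-buffer cache is carved out of nvc0n1 with bdev_split_create above: cache_size=5171 MiB, which is exactly 103424 / 20, i.e. the NV cache is sized at 5 % of the base volume. Whether common.sh literally computes it as a division by 20 is an assumption; the arithmetic below only reproduces the traced numbers:

    base_size=103424
    echo $(( base_size / 20 ))                  # 5171, the -s argument seen above
    $rpc_py bdev_split_create nvc0n1 -s 5171 1  # one 5171 MiB split -> nvc0n1p0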
nb=26476544 00:29:06.052 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:29:06.052 23:10:33 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:29:06.052 23:10:33 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:29:06.052 23:10:33 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 57a90c22-4526-4328-886d-d8c0e25b5f63 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:29:06.312 [2024-12-09 23:10:33.579989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.312 [2024-12-09 23:10:33.580061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:06.312 [2024-12-09 23:10:33.580084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:06.312 [2024-12-09 23:10:33.580103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.312 [2024-12-09 23:10:33.583601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.312 [2024-12-09 23:10:33.583658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:06.312 [2024-12-09 23:10:33.583675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.463 ms 00:29:06.312 [2024-12-09 23:10:33.583686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.312 [2024-12-09 23:10:33.583838] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:06.312 [2024-12-09 23:10:33.584811] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:06.312 [2024-12-09 23:10:33.584850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.312 [2024-12-09 23:10:33.584862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:06.312 [2024-12-09 23:10:33.584877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:29:06.312 [2024-12-09 23:10:33.584887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.312 [2024-12-09 23:10:33.585008] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e2a83606-4f0a-47b8-82fe-3fe8d4df16c8 00:29:06.312 [2024-12-09 23:10:33.587174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.312 [2024-12-09 23:10:33.587218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:06.312 [2024-12-09 23:10:33.587232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:29:06.312 [2024-12-09 23:10:33.587245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.312 [2024-12-09 23:10:33.599826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.312 [2024-12-09 23:10:33.599894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:06.312 [2024-12-09 23:10:33.599914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.508 ms 00:29:06.312 [2024-12-09 23:10:33.599927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.312 [2024-12-09 23:10:33.600137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.312 [2024-12-09 23:10:33.600157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:06.312 [2024-12-09 23:10:33.600170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.101 ms 00:29:06.312 [2024-12-09 23:10:33.600189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.312 [2024-12-09 23:10:33.600230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.312 [2024-12-09 23:10:33.600244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:06.312 [2024-12-09 23:10:33.600255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:06.312 [2024-12-09 23:10:33.600287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.312 [2024-12-09 23:10:33.600333] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:06.312 [2024-12-09 23:10:33.605411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.312 [2024-12-09 23:10:33.605483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:06.312 [2024-12-09 23:10:33.605502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.090 ms 00:29:06.312 [2024-12-09 23:10:33.605514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.312 [2024-12-09 23:10:33.605635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.312 [2024-12-09 23:10:33.605669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:06.312 [2024-12-09 23:10:33.605684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:06.312 [2024-12-09 23:10:33.605695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.312 [2024-12-09 23:10:33.605735] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:06.312 [2024-12-09 23:10:33.605874] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:06.312 [2024-12-09 23:10:33.605895] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:06.312 [2024-12-09 23:10:33.605910] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:06.312 [2024-12-09 23:10:33.605927] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:06.312 [2024-12-09 23:10:33.605939] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:06.312 [2024-12-09 23:10:33.605953] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:06.312 [2024-12-09 23:10:33.605964] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:06.312 [2024-12-09 23:10:33.605978] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:06.312 [2024-12-09 23:10:33.605991] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:06.312 [2024-12-09 23:10:33.606005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.312 [2024-12-09 23:10:33.606016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:06.312 [2024-12-09 23:10:33.606029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:29:06.312 [2024-12-09 23:10:33.606040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.312 [2024-12-09 23:10:33.606131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
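bdev_ftl_create has now stitched the two devices together: the 103424 MiB lvol as base, nvc0n1p0 as write-buffer cache, --core_mask 7, a 60 MiB L2P DRAM limit and 10 % overprovisioning. The layout figures the driver prints are self-consistent, which is easy to check from the trace values:

    echo $(( 23592960 * 4096 / 1024 / 1024 ))   # L2P entries x 4 KiB block = 92160 MiB of user space
                                                # (the 103424 MiB base minus ~10% OP and metadata)
    echo $(( 23592960 * 4 / 1024 / 1024 ))      # 4-byte entries -> a 90 MiB L2P table, hence the
                                                # 60 MiB DRAM cap and "59 (of 60) MiB" resident later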
00:29:06.312 [2024-12-09 23:10:33.606143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:06.312 [2024-12-09 23:10:33.606158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:06.312 [2024-12-09 23:10:33.606168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.313 [2024-12-09 23:10:33.606299] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:06.313 [2024-12-09 23:10:33.606313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:06.313 [2024-12-09 23:10:33.606327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:06.313 [2024-12-09 23:10:33.606338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:06.313 [2024-12-09 23:10:33.606362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:06.313 [2024-12-09 23:10:33.606383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:06.313 [2024-12-09 23:10:33.606396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:06.313 [2024-12-09 23:10:33.606419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:06.313 [2024-12-09 23:10:33.606430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:06.313 [2024-12-09 23:10:33.606441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:06.313 [2024-12-09 23:10:33.606470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:06.313 [2024-12-09 23:10:33.606483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:06.313 [2024-12-09 23:10:33.606492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:06.313 [2024-12-09 23:10:33.606518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:06.313 [2024-12-09 23:10:33.606531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:06.313 [2024-12-09 23:10:33.606553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.313 [2024-12-09 23:10:33.606576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:06.313 [2024-12-09 23:10:33.606586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.313 [2024-12-09 23:10:33.606607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:06.313 [2024-12-09 23:10:33.606620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.313 [2024-12-09 23:10:33.606641] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:29:06.313 [2024-12-09 23:10:33.606651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.313 [2024-12-09 23:10:33.606671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:06.313 [2024-12-09 23:10:33.606686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:06.313 [2024-12-09 23:10:33.606707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:06.313 [2024-12-09 23:10:33.606716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:06.313 [2024-12-09 23:10:33.606729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:06.313 [2024-12-09 23:10:33.606738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:06.313 [2024-12-09 23:10:33.606750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:06.313 [2024-12-09 23:10:33.606760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:06.313 [2024-12-09 23:10:33.606781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:06.313 [2024-12-09 23:10:33.606793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606802] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:06.313 [2024-12-09 23:10:33.606815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:06.313 [2024-12-09 23:10:33.606826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:06.313 [2024-12-09 23:10:33.606838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.313 [2024-12-09 23:10:33.606848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:06.313 [2024-12-09 23:10:33.606862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:06.313 [2024-12-09 23:10:33.606873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:06.313 [2024-12-09 23:10:33.606885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:06.313 [2024-12-09 23:10:33.606894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:06.313 [2024-12-09 23:10:33.606906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:06.313 [2024-12-09 23:10:33.606925] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:06.313 [2024-12-09 23:10:33.606942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:06.313 [2024-12-09 23:10:33.606957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:06.313 [2024-12-09 23:10:33.606970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:06.313 [2024-12-09 23:10:33.606980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:29:06.313 [2024-12-09 23:10:33.606995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:06.313 [2024-12-09 23:10:33.607006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:06.313 [2024-12-09 23:10:33.607064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:06.313 [2024-12-09 23:10:33.607075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:06.313 [2024-12-09 23:10:33.607091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:06.313 [2024-12-09 23:10:33.607101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:06.313 [2024-12-09 23:10:33.607117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:06.313 [2024-12-09 23:10:33.607127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:06.313 [2024-12-09 23:10:33.607141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:06.313 [2024-12-09 23:10:33.607151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:06.313 [2024-12-09 23:10:33.607164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:06.313 [2024-12-09 23:10:33.607174] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:06.313 [2024-12-09 23:10:33.607193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:06.313 [2024-12-09 23:10:33.607204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:06.313 [2024-12-09 23:10:33.607218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:06.313 [2024-12-09 23:10:33.607229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:06.313 [2024-12-09 23:10:33.607243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:06.313 [2024-12-09 23:10:33.607254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.313 [2024-12-09 23:10:33.607267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:06.313 [2024-12-09 23:10:33.607278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.014 ms 00:29:06.313 [2024-12-09 23:10:33.607290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.313 [2024-12-09 23:10:33.607376] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:29:06.313 [2024-12-09 23:10:33.607395] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:09.603 [2024-12-09 23:10:36.920733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.603 [2024-12-09 23:10:36.920827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:09.603 [2024-12-09 23:10:36.920846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3318.734 ms 00:29:09.603 [2024-12-09 23:10:36.920860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.867 [2024-12-09 23:10:36.962806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.868 [2024-12-09 23:10:36.962887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:09.868 [2024-12-09 23:10:36.962906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.570 ms 00:29:09.868 [2024-12-09 23:10:36.962920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.868 [2024-12-09 23:10:36.963129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.868 [2024-12-09 23:10:36.963147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:09.868 [2024-12-09 23:10:36.963184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:09.868 [2024-12-09 23:10:36.963203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.868 [2024-12-09 23:10:37.028967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.868 [2024-12-09 23:10:37.029311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:09.868 [2024-12-09 23:10:37.029340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.827 ms 00:29:09.868 [2024-12-09 23:10:37.029358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.868 [2024-12-09 23:10:37.029502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.868 [2024-12-09 23:10:37.029519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:09.868 [2024-12-09 23:10:37.029531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:09.868 [2024-12-09 23:10:37.029545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.868 [2024-12-09 23:10:37.030348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.868 [2024-12-09 23:10:37.030369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:09.868 [2024-12-09 23:10:37.030380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:29:09.868 [2024-12-09 23:10:37.030393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.868 [2024-12-09 23:10:37.030536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.868 [2024-12-09 23:10:37.030552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:09.868 [2024-12-09 23:10:37.030583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:29:09.868 [2024-12-09 23:10:37.030601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.868 [2024-12-09 23:10:37.056248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.868 [2024-12-09 23:10:37.056547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
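Of the roughly 4 s that "FTL startup" reports further down, the 3318.734 ms "Scrub NV cache" step above dominates. A rough one-liner for ranking step durations in a saved copy of this console output (the file name is assumed, and it assumes one trace_step entry per line, as in the raw log):

    awk -F'name: ' '/428:trace_step/ { name=$2 }
        /430:trace_step/ && match($0, /duration: [0-9.]+ ms/) {
            print substr($0, RSTART+10, RLENGTH-10), "-", name
        }' autotest.log | sort -rn | head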
[FTL][ftl0] name: Initialize reloc 00:29:09.868 [2024-12-09 23:10:37.056577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.649 ms 00:29:09.868 [2024-12-09 23:10:37.056591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.868 [2024-12-09 23:10:37.074550] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:09.868 [2024-12-09 23:10:37.101789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.868 [2024-12-09 23:10:37.101869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:09.868 [2024-12-09 23:10:37.101889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.091 ms 00:29:09.868 [2024-12-09 23:10:37.101900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.868 [2024-12-09 23:10:37.195023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.868 [2024-12-09 23:10:37.195380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:09.868 [2024-12-09 23:10:37.195416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.106 ms 00:29:09.868 [2024-12-09 23:10:37.195428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:09.868 [2024-12-09 23:10:37.195742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:09.868 [2024-12-09 23:10:37.195759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:09.868 [2024-12-09 23:10:37.195779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:29:09.868 [2024-12-09 23:10:37.195790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.127 [2024-12-09 23:10:37.237684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.127 [2024-12-09 23:10:37.238025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:10.127 [2024-12-09 23:10:37.238064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.908 ms 00:29:10.127 [2024-12-09 23:10:37.238076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.127 [2024-12-09 23:10:37.279156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.127 [2024-12-09 23:10:37.279241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:10.127 [2024-12-09 23:10:37.279264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.969 ms 00:29:10.127 [2024-12-09 23:10:37.279275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.127 [2024-12-09 23:10:37.280183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.127 [2024-12-09 23:10:37.280216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:10.127 [2024-12-09 23:10:37.280232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:29:10.127 [2024-12-09 23:10:37.280244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.127 [2024-12-09 23:10:37.398166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.127 [2024-12-09 23:10:37.398259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:10.127 [2024-12-09 23:10:37.398285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.055 ms 00:29:10.127 [2024-12-09 23:10:37.398297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:10.127 [2024-12-09 23:10:37.441924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.127 [2024-12-09 23:10:37.442013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:10.127 [2024-12-09 23:10:37.442034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.462 ms 00:29:10.127 [2024-12-09 23:10:37.442046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.386 [2024-12-09 23:10:37.486079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.386 [2024-12-09 23:10:37.486181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:10.386 [2024-12-09 23:10:37.486202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.902 ms 00:29:10.386 [2024-12-09 23:10:37.486213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.386 [2024-12-09 23:10:37.529022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.386 [2024-12-09 23:10:37.529139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:10.386 [2024-12-09 23:10:37.529161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.694 ms 00:29:10.386 [2024-12-09 23:10:37.529172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.386 [2024-12-09 23:10:37.529350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.386 [2024-12-09 23:10:37.529369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:10.386 [2024-12-09 23:10:37.529389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:10.386 [2024-12-09 23:10:37.529400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.386 [2024-12-09 23:10:37.529527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:10.386 [2024-12-09 23:10:37.529541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:10.386 [2024-12-09 23:10:37.529555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:10.386 [2024-12-09 23:10:37.529570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:10.386 [2024-12-09 23:10:37.530952] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:10.386 [2024-12-09 23:10:37.536633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3957.076 ms, result 0 00:29:10.386 [2024-12-09 23:10:37.537751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:10.386 { 00:29:10.386 "name": "ftl0", 00:29:10.386 "uuid": "e2a83606-4f0a-47b8-82fe-3fe8d4df16c8" 00:29:10.386 } 00:29:10.386 23:10:37 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:29:10.386 23:10:37 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:29:10.386 23:10:37 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:10.386 23:10:37 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:29:10.386 23:10:37 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:10.386 23:10:37 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:10.386 23:10:37 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:10.651 23:10:37 
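With "FTL startup" finished (3957.076 ms, result 0) and the RPC returning the new bdev's name and uuid, trim.sh waits for ftl0 to become visible before touching it. The waitforbdev flow traced here boils down to two RPCs (simplified sketch):

    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}    # ms, default as traced
        $rpc_py bdev_wait_for_examine                 # let pending bdev examination settle
        $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
    }
    waitforbdev ftl0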
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:29:10.914 [ 00:29:10.914 { 00:29:10.914 "name": "ftl0", 00:29:10.914 "aliases": [ 00:29:10.914 "e2a83606-4f0a-47b8-82fe-3fe8d4df16c8" 00:29:10.914 ], 00:29:10.914 "product_name": "FTL disk", 00:29:10.914 "block_size": 4096, 00:29:10.914 "num_blocks": 23592960, 00:29:10.914 "uuid": "e2a83606-4f0a-47b8-82fe-3fe8d4df16c8", 00:29:10.914 "assigned_rate_limits": { 00:29:10.914 "rw_ios_per_sec": 0, 00:29:10.914 "rw_mbytes_per_sec": 0, 00:29:10.914 "r_mbytes_per_sec": 0, 00:29:10.914 "w_mbytes_per_sec": 0 00:29:10.914 }, 00:29:10.914 "claimed": false, 00:29:10.914 "zoned": false, 00:29:10.914 "supported_io_types": { 00:29:10.914 "read": true, 00:29:10.914 "write": true, 00:29:10.914 "unmap": true, 00:29:10.914 "flush": true, 00:29:10.914 "reset": false, 00:29:10.914 "nvme_admin": false, 00:29:10.914 "nvme_io": false, 00:29:10.914 "nvme_io_md": false, 00:29:10.914 "write_zeroes": true, 00:29:10.914 "zcopy": false, 00:29:10.914 "get_zone_info": false, 00:29:10.914 "zone_management": false, 00:29:10.914 "zone_append": false, 00:29:10.914 "compare": false, 00:29:10.914 "compare_and_write": false, 00:29:10.914 "abort": false, 00:29:10.914 "seek_hole": false, 00:29:10.914 "seek_data": false, 00:29:10.914 "copy": false, 00:29:10.914 "nvme_iov_md": false 00:29:10.914 }, 00:29:10.914 "driver_specific": { 00:29:10.914 "ftl": { 00:29:10.914 "base_bdev": "57a90c22-4526-4328-886d-d8c0e25b5f63", 00:29:10.914 "cache": "nvc0n1p0" 00:29:10.914 } 00:29:10.914 } 00:29:10.914 } 00:29:10.914 ] 00:29:10.914 23:10:38 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:29:10.914 23:10:38 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:29:10.914 23:10:38 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:10.914 23:10:38 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:29:10.914 23:10:38 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:29:11.172 23:10:38 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:29:11.173 { 00:29:11.173 "name": "ftl0", 00:29:11.173 "aliases": [ 00:29:11.173 "e2a83606-4f0a-47b8-82fe-3fe8d4df16c8" 00:29:11.173 ], 00:29:11.173 "product_name": "FTL disk", 00:29:11.173 "block_size": 4096, 00:29:11.173 "num_blocks": 23592960, 00:29:11.173 "uuid": "e2a83606-4f0a-47b8-82fe-3fe8d4df16c8", 00:29:11.173 "assigned_rate_limits": { 00:29:11.173 "rw_ios_per_sec": 0, 00:29:11.173 "rw_mbytes_per_sec": 0, 00:29:11.173 "r_mbytes_per_sec": 0, 00:29:11.173 "w_mbytes_per_sec": 0 00:29:11.173 }, 00:29:11.173 "claimed": false, 00:29:11.173 "zoned": false, 00:29:11.173 "supported_io_types": { 00:29:11.173 "read": true, 00:29:11.173 "write": true, 00:29:11.173 "unmap": true, 00:29:11.173 "flush": true, 00:29:11.173 "reset": false, 00:29:11.173 "nvme_admin": false, 00:29:11.173 "nvme_io": false, 00:29:11.173 "nvme_io_md": false, 00:29:11.173 "write_zeroes": true, 00:29:11.173 "zcopy": false, 00:29:11.173 "get_zone_info": false, 00:29:11.173 "zone_management": false, 00:29:11.173 "zone_append": false, 00:29:11.173 "compare": false, 00:29:11.173 "compare_and_write": false, 00:29:11.173 "abort": false, 00:29:11.173 "seek_hole": false, 00:29:11.173 "seek_data": false, 00:29:11.173 "copy": false, 00:29:11.173 "nvme_iov_md": false 00:29:11.173 }, 00:29:11.173 "driver_specific": { 00:29:11.173 "ftl": { 00:29:11.173 "base_bdev": 
"57a90c22-4526-4328-886d-d8c0e25b5f63", 00:29:11.173 "cache": "nvc0n1p0" 00:29:11.173 } 00:29:11.173 } 00:29:11.173 } 00:29:11.173 ]' 00:29:11.173 23:10:38 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:29:11.173 23:10:38 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:29:11.173 23:10:38 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:11.431 [2024-12-09 23:10:38.666751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.431 [2024-12-09 23:10:38.666837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:11.431 [2024-12-09 23:10:38.666860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:11.431 [2024-12-09 23:10:38.666878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.431 [2024-12-09 23:10:38.666920] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:11.431 [2024-12-09 23:10:38.671109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.431 [2024-12-09 23:10:38.671161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:11.431 [2024-12-09 23:10:38.671188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.166 ms 00:29:11.431 [2024-12-09 23:10:38.671200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.431 [2024-12-09 23:10:38.671785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.431 [2024-12-09 23:10:38.671809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:11.431 [2024-12-09 23:10:38.671825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:29:11.431 [2024-12-09 23:10:38.671836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.431 [2024-12-09 23:10:38.674711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.431 [2024-12-09 23:10:38.674748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:11.431 [2024-12-09 23:10:38.674763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.842 ms 00:29:11.431 [2024-12-09 23:10:38.674774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.431 [2024-12-09 23:10:38.680525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.431 [2024-12-09 23:10:38.680587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:11.431 [2024-12-09 23:10:38.680605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.677 ms 00:29:11.431 [2024-12-09 23:10:38.680617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.431 [2024-12-09 23:10:38.723154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.431 [2024-12-09 23:10:38.723237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:11.431 [2024-12-09 23:10:38.723265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.445 ms 00:29:11.431 [2024-12-09 23:10:38.723276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.431 [2024-12-09 23:10:38.748059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.431 [2024-12-09 23:10:38.748146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:11.431 [2024-12-09 23:10:38.748167] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.616 ms 00:29:11.431 [2024-12-09 23:10:38.748184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.431 [2024-12-09 23:10:38.748523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.431 [2024-12-09 23:10:38.748540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:11.431 [2024-12-09 23:10:38.748556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:29:11.431 [2024-12-09 23:10:38.748566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.689 [2024-12-09 23:10:38.791187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.689 [2024-12-09 23:10:38.791269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:11.689 [2024-12-09 23:10:38.791291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.641 ms 00:29:11.689 [2024-12-09 23:10:38.791302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.689 [2024-12-09 23:10:38.833508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.689 [2024-12-09 23:10:38.833581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:11.689 [2024-12-09 23:10:38.833605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.070 ms 00:29:11.689 [2024-12-09 23:10:38.833616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.689 [2024-12-09 23:10:38.874831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.689 [2024-12-09 23:10:38.874915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:11.689 [2024-12-09 23:10:38.874936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.099 ms 00:29:11.689 [2024-12-09 23:10:38.874947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.689 [2024-12-09 23:10:38.916302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.689 [2024-12-09 23:10:38.916376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:11.689 [2024-12-09 23:10:38.916396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.177 ms 00:29:11.689 [2024-12-09 23:10:38.916407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.689 [2024-12-09 23:10:38.916578] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:11.689 [2024-12-09 23:10:38.916600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 
[2024-12-09 23:10:38.916720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:11.689 [2024-12-09 23:10:38.916889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.916900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.916917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.916929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.916943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.916955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.916969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.916982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.916996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:29:11.690 [2024-12-09 23:10:38.917058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:11.690 [2024-12-09 23:10:38.917949] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:11.690 [2024-12-09 23:10:38.917965] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e2a83606-4f0a-47b8-82fe-3fe8d4df16c8 00:29:11.690 [2024-12-09 23:10:38.917977] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:11.690 [2024-12-09 23:10:38.917990] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:11.690 [2024-12-09 23:10:38.918000] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:11.690 [2024-12-09 23:10:38.918018] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:11.690 [2024-12-09 23:10:38.918029] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:11.690 [2024-12-09 23:10:38.918042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:29:11.690 [2024-12-09 23:10:38.918052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:11.690 [2024-12-09 23:10:38.918064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:11.690 [2024-12-09 23:10:38.918073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:11.690 [2024-12-09 23:10:38.918087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.690 [2024-12-09 23:10:38.918100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:11.690 [2024-12-09 23:10:38.918114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:29:11.690 [2024-12-09 23:10:38.918126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.691 [2024-12-09 23:10:38.940179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.691 [2024-12-09 23:10:38.940251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:11.691 [2024-12-09 23:10:38.940275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.028 ms 00:29:11.691 [2024-12-09 23:10:38.940285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.691 [2024-12-09 23:10:38.941008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.691 [2024-12-09 23:10:38.941027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:11.691 [2024-12-09 23:10:38.941042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:29:11.691 [2024-12-09 23:10:38.941053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.691 [2024-12-09 23:10:39.014204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.691 [2024-12-09 23:10:39.014276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:11.691 [2024-12-09 23:10:39.014296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.691 [2024-12-09 23:10:39.014307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.691 [2024-12-09 23:10:39.014515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.691 [2024-12-09 23:10:39.014532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:11.691 [2024-12-09 23:10:39.014548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.691 [2024-12-09 23:10:39.014559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.691 [2024-12-09 23:10:39.014648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.691 [2024-12-09 23:10:39.014663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:11.691 [2024-12-09 23:10:39.014683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.691 [2024-12-09 23:10:39.014694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.691 [2024-12-09 23:10:39.014730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.691 [2024-12-09 23:10:39.014742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:11.691 [2024-12-09 23:10:39.014755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.691 [2024-12-09 23:10:39.014765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.949 [2024-12-09 
23:10:39.154981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.949 [2024-12-09 23:10:39.155062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:11.949 [2024-12-09 23:10:39.155080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.949 [2024-12-09 23:10:39.155091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.949 [2024-12-09 23:10:39.260322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.949 [2024-12-09 23:10:39.260398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:11.949 [2024-12-09 23:10:39.260417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.949 [2024-12-09 23:10:39.260428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.949 [2024-12-09 23:10:39.260608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.949 [2024-12-09 23:10:39.260623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:11.949 [2024-12-09 23:10:39.260641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.949 [2024-12-09 23:10:39.260656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.949 [2024-12-09 23:10:39.260716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.949 [2024-12-09 23:10:39.260728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:11.949 [2024-12-09 23:10:39.260742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.949 [2024-12-09 23:10:39.260752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.949 [2024-12-09 23:10:39.260917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.949 [2024-12-09 23:10:39.260931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:11.949 [2024-12-09 23:10:39.260945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.949 [2024-12-09 23:10:39.260959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.949 [2024-12-09 23:10:39.261020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.949 [2024-12-09 23:10:39.261034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:11.949 [2024-12-09 23:10:39.261048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.949 [2024-12-09 23:10:39.261058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.949 [2024-12-09 23:10:39.261117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.949 [2024-12-09 23:10:39.261128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:11.949 [2024-12-09 23:10:39.261144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.949 [2024-12-09 23:10:39.261155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.949 [2024-12-09 23:10:39.261225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.949 [2024-12-09 23:10:39.261237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:11.949 [2024-12-09 23:10:39.261250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.949 [2024-12-09 23:10:39.261260] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.949 [2024-12-09 23:10:39.261474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 595.666 ms, result 0 00:29:11.949 true 00:29:12.216 23:10:39 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78609 00:29:12.216 23:10:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78609 ']' 00:29:12.216 23:10:39 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78609 00:29:12.216 23:10:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:29:12.216 23:10:39 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.216 23:10:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78609 00:29:12.216 23:10:39 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:12.216 23:10:39 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:12.216 killing process with pid 78609 00:29:12.216 23:10:39 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78609' 00:29:12.216 23:10:39 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78609 00:29:12.216 23:10:39 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78609 00:29:14.760 23:10:41 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:29:15.716 65536+0 records in 00:29:15.716 65536+0 records out 00:29:15.716 268435456 bytes (268 MB, 256 MiB) copied, 1.04648 s, 257 MB/s 00:29:15.716 23:10:42 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:15.716 [2024-12-09 23:10:43.022007] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
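(For reference, the write phase just traced reduces to the following standalone sequence — a minimal sketch reconstructed only from the commands visible in the trace above. The dd output path is inferred from the --if argument of the spdk_dd invocation, since bash xtrace does not display redirects, and the size arithmetic in the comments is added for illustration.)

# 65536 blocks x 4096 bytes = 268435456 bytes = 256 MiB, matching dd's
# "268435456 bytes (268 MB, 256 MiB) copied" report above.
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536

# spdk_dd then starts its own SPDK application (the "Starting SPDK ..."
# banner that follows), brings up the ftl0 bdev from the JSON config the
# test saved earlier via save_subsystem_config, and streams the pattern
# into it as the output bdev ("--ob=ftl0").
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

(At the roughly 25 MBps reported per progress tick, the 256 MiB copy takes on the order of ten seconds, which matches the per-second "Copying:" ticks further down in this log.)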
00:29:15.716 [2024-12-09 23:10:43.022148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78810 ] 00:29:15.974 [2024-12-09 23:10:43.204281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.234 [2024-12-09 23:10:43.343650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:16.493 [2024-12-09 23:10:43.745672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:16.493 [2024-12-09 23:10:43.745783] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:16.752 [2024-12-09 23:10:43.910702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.753 [2024-12-09 23:10:43.910775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:16.753 [2024-12-09 23:10:43.910792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:16.753 [2024-12-09 23:10:43.910804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.914317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.753 [2024-12-09 23:10:43.914375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:16.753 [2024-12-09 23:10:43.914390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.495 ms 00:29:16.753 [2024-12-09 23:10:43.914400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.914557] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:16.753 [2024-12-09 23:10:43.915637] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:16.753 [2024-12-09 23:10:43.915674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.753 [2024-12-09 23:10:43.915686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:16.753 [2024-12-09 23:10:43.915699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:29:16.753 [2024-12-09 23:10:43.915710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.917269] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:16.753 [2024-12-09 23:10:43.940078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.753 [2024-12-09 23:10:43.940152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:16.753 [2024-12-09 23:10:43.940170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.844 ms 00:29:16.753 [2024-12-09 23:10:43.940182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.940364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.753 [2024-12-09 23:10:43.940380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:16.753 [2024-12-09 23:10:43.940393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:16.753 [2024-12-09 23:10:43.940404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.951143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:16.753 [2024-12-09 23:10:43.951194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:16.753 [2024-12-09 23:10:43.951208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.681 ms 00:29:16.753 [2024-12-09 23:10:43.951219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.951388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.753 [2024-12-09 23:10:43.951407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:16.753 [2024-12-09 23:10:43.951419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:29:16.753 [2024-12-09 23:10:43.951430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.951492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.753 [2024-12-09 23:10:43.951505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:16.753 [2024-12-09 23:10:43.951515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:16.753 [2024-12-09 23:10:43.951526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.951553] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:16.753 [2024-12-09 23:10:43.957245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.753 [2024-12-09 23:10:43.957289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:16.753 [2024-12-09 23:10:43.957303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.709 ms 00:29:16.753 [2024-12-09 23:10:43.957314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.957392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.753 [2024-12-09 23:10:43.957405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:16.753 [2024-12-09 23:10:43.957417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:16.753 [2024-12-09 23:10:43.957428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.957467] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:16.753 [2024-12-09 23:10:43.957493] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:16.753 [2024-12-09 23:10:43.957531] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:16.753 [2024-12-09 23:10:43.957550] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:16.753 [2024-12-09 23:10:43.957641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:16.753 [2024-12-09 23:10:43.957655] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:16.753 [2024-12-09 23:10:43.957669] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:16.753 [2024-12-09 23:10:43.957685] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:16.753 [2024-12-09 23:10:43.957698] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:16.753 [2024-12-09 23:10:43.957711] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:16.753 [2024-12-09 23:10:43.957721] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:16.753 [2024-12-09 23:10:43.957732] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:16.753 [2024-12-09 23:10:43.957742] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:16.753 [2024-12-09 23:10:43.957753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.753 [2024-12-09 23:10:43.957763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:16.753 [2024-12-09 23:10:43.957774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:29:16.753 [2024-12-09 23:10:43.957784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.957861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.753 [2024-12-09 23:10:43.957876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:16.753 [2024-12-09 23:10:43.957886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:16.753 [2024-12-09 23:10:43.957896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.753 [2024-12-09 23:10:43.957986] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:16.753 [2024-12-09 23:10:43.957998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:16.753 [2024-12-09 23:10:43.958009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:16.753 [2024-12-09 23:10:43.958020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.753 [2024-12-09 23:10:43.958031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:16.753 [2024-12-09 23:10:43.958040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:16.753 [2024-12-09 23:10:43.958050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:16.753 [2024-12-09 23:10:43.958066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:16.753 [2024-12-09 23:10:43.958077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:16.753 [2024-12-09 23:10:43.958086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:16.753 [2024-12-09 23:10:43.958095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:16.753 [2024-12-09 23:10:43.958120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:16.753 [2024-12-09 23:10:43.958130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:16.753 [2024-12-09 23:10:43.958140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:16.753 [2024-12-09 23:10:43.958149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:16.753 [2024-12-09 23:10:43.958159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.753 [2024-12-09 23:10:43.958169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:16.753 [2024-12-09 23:10:43.958178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:16.753 [2024-12-09 23:10:43.958187] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.753 [2024-12-09 23:10:43.958197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:16.753 [2024-12-09 23:10:43.958207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:16.753 [2024-12-09 23:10:43.958216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.753 [2024-12-09 23:10:43.958225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:16.753 [2024-12-09 23:10:43.958234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:16.753 [2024-12-09 23:10:43.958243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.753 [2024-12-09 23:10:43.958253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:16.753 [2024-12-09 23:10:43.958262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:16.753 [2024-12-09 23:10:43.958271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.753 [2024-12-09 23:10:43.958280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:16.753 [2024-12-09 23:10:43.958290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:16.753 [2024-12-09 23:10:43.958299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:16.753 [2024-12-09 23:10:43.958308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:16.753 [2024-12-09 23:10:43.958317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:16.753 [2024-12-09 23:10:43.958326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:16.753 [2024-12-09 23:10:43.958335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:16.753 [2024-12-09 23:10:43.958345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:16.753 [2024-12-09 23:10:43.958354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:16.753 [2024-12-09 23:10:43.958363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:16.753 [2024-12-09 23:10:43.958372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:16.753 [2024-12-09 23:10:43.958384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.753 [2024-12-09 23:10:43.958393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:16.753 [2024-12-09 23:10:43.958402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:16.753 [2024-12-09 23:10:43.958412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.754 [2024-12-09 23:10:43.958421] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:16.754 [2024-12-09 23:10:43.958431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:16.754 [2024-12-09 23:10:43.958446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:16.754 [2024-12-09 23:10:43.958790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:16.754 [2024-12-09 23:10:43.958826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:16.754 [2024-12-09 23:10:43.958856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:16.754 [2024-12-09 23:10:43.958885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:16.754 
[2024-12-09 23:10:43.958916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:16.754 [2024-12-09 23:10:43.958945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:16.754 [2024-12-09 23:10:43.958974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:16.754 [2024-12-09 23:10:43.959048] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:16.754 [2024-12-09 23:10:43.959106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.754 [2024-12-09 23:10:43.959155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:16.754 [2024-12-09 23:10:43.959204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:16.754 [2024-12-09 23:10:43.959251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:16.754 [2024-12-09 23:10:43.959353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:16.754 [2024-12-09 23:10:43.959404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:16.754 [2024-12-09 23:10:43.959461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:16.754 [2024-12-09 23:10:43.959513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:16.754 [2024-12-09 23:10:43.959688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:16.754 [2024-12-09 23:10:43.959735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:16.754 [2024-12-09 23:10:43.959783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:16.754 [2024-12-09 23:10:43.959876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:16.754 [2024-12-09 23:10:43.959927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:16.754 [2024-12-09 23:10:43.960021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:16.754 [2024-12-09 23:10:43.960072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:16.754 [2024-12-09 23:10:43.960155] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:16.754 [2024-12-09 23:10:43.960173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:16.754 [2024-12-09 23:10:43.960195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:16.754 [2024-12-09 23:10:43.960207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:16.754 [2024-12-09 23:10:43.960218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:16.754 [2024-12-09 23:10:43.960228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:16.754 [2024-12-09 23:10:43.960242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.754 [2024-12-09 23:10:43.960259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:16.754 [2024-12-09 23:10:43.960271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.314 ms 00:29:16.754 [2024-12-09 23:10:43.960282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.754 [2024-12-09 23:10:44.006815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.754 [2024-12-09 23:10:44.006882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:16.754 [2024-12-09 23:10:44.006899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.521 ms 00:29:16.754 [2024-12-09 23:10:44.006911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.754 [2024-12-09 23:10:44.007118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.754 [2024-12-09 23:10:44.007132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:16.754 [2024-12-09 23:10:44.007143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:16.754 [2024-12-09 23:10:44.007154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.754 [2024-12-09 23:10:44.069762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.754 [2024-12-09 23:10:44.069833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:16.754 [2024-12-09 23:10:44.069849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.681 ms 00:29:16.754 [2024-12-09 23:10:44.069861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.754 [2024-12-09 23:10:44.069994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.754 [2024-12-09 23:10:44.070007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:16.754 [2024-12-09 23:10:44.070019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:16.754 [2024-12-09 23:10:44.070030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.754 [2024-12-09 23:10:44.070839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.754 [2024-12-09 23:10:44.070858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:16.754 [2024-12-09 23:10:44.070877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:29:16.754 [2024-12-09 23:10:44.070889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.754 [2024-12-09 23:10:44.071026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.754 [2024-12-09 23:10:44.071041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:16.754 [2024-12-09 23:10:44.071053] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:29:16.754 [2024-12-09 23:10:44.071063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.093052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.093122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:17.013 [2024-12-09 23:10:44.093139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.998 ms 00:29:17.013 [2024-12-09 23:10:44.093151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.114892] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:17.013 [2024-12-09 23:10:44.114988] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:17.013 [2024-12-09 23:10:44.115010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.115023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:17.013 [2024-12-09 23:10:44.115037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.722 ms 00:29:17.013 [2024-12-09 23:10:44.115047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.146447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.146843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:17.013 [2024-12-09 23:10:44.146878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.261 ms 00:29:17.013 [2024-12-09 23:10:44.146890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.166973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.167047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:17.013 [2024-12-09 23:10:44.167063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.956 ms 00:29:17.013 [2024-12-09 23:10:44.167075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.186684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.186750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:17.013 [2024-12-09 23:10:44.186768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.468 ms 00:29:17.013 [2024-12-09 23:10:44.186778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.187629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.187658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:17.013 [2024-12-09 23:10:44.187672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:29:17.013 [2024-12-09 23:10:44.187683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.278902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.278978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:17.013 [2024-12-09 23:10:44.278997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 91.324 ms 00:29:17.013 [2024-12-09 23:10:44.279009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.293232] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:17.013 [2024-12-09 23:10:44.310608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.310679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:17.013 [2024-12-09 23:10:44.310699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.488 ms 00:29:17.013 [2024-12-09 23:10:44.310710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.310853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.310868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:17.013 [2024-12-09 23:10:44.310880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:17.013 [2024-12-09 23:10:44.310890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.310951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.310963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:17.013 [2024-12-09 23:10:44.310974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:17.013 [2024-12-09 23:10:44.310985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.311024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.311044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:17.013 [2024-12-09 23:10:44.311055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:17.013 [2024-12-09 23:10:44.311065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.013 [2024-12-09 23:10:44.311105] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:17.013 [2024-12-09 23:10:44.311118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.013 [2024-12-09 23:10:44.311128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:17.013 [2024-12-09 23:10:44.311139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:17.013 [2024-12-09 23:10:44.311149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.270 [2024-12-09 23:10:44.349501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.270 [2024-12-09 23:10:44.349574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:17.270 [2024-12-09 23:10:44.349593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.387 ms 00:29:17.270 [2024-12-09 23:10:44.349604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:17.270 [2024-12-09 23:10:44.349798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:17.270 [2024-12-09 23:10:44.349814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:17.270 [2024-12-09 23:10:44.349827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:17.270 [2024-12-09 23:10:44.349838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:17.270 [2024-12-09 23:10:44.351141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:17.270 [2024-12-09 23:10:44.356071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 440.813 ms, result 0 00:29:17.270 [2024-12-09 23:10:44.357109] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:17.270 [2024-12-09 23:10:44.376873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:18.206  [2024-12-09T23:10:46.473Z] Copying: 24/256 [MB] (24 MBps) [2024-12-09T23:10:47.410Z] Copying: 50/256 [MB] (25 MBps) [2024-12-09T23:10:48.786Z] Copying: 75/256 [MB] (25 MBps) [2024-12-09T23:10:49.409Z] Copying: 101/256 [MB] (25 MBps) [2024-12-09T23:10:50.784Z] Copying: 126/256 [MB] (25 MBps) [2024-12-09T23:10:51.720Z] Copying: 150/256 [MB] (24 MBps) [2024-12-09T23:10:52.655Z] Copying: 175/256 [MB] (24 MBps) [2024-12-09T23:10:53.589Z] Copying: 200/256 [MB] (25 MBps) [2024-12-09T23:10:54.526Z] Copying: 226/256 [MB] (25 MBps) [2024-12-09T23:10:54.786Z] Copying: 251/256 [MB] (25 MBps) [2024-12-09T23:10:54.786Z] Copying: 256/256 [MB] (average 25 MBps)[2024-12-09 23:10:54.551568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:27.450 [2024-12-09 23:10:54.566437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.450 [2024-12-09 23:10:54.566783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:27.450 [2024-12-09 23:10:54.566813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:27.450 [2024-12-09 23:10:54.566836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.450 [2024-12-09 23:10:54.566885] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:27.450 [2024-12-09 23:10:54.571490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.450 [2024-12-09 23:10:54.571528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:27.450 [2024-12-09 23:10:54.571542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:29:27.450 [2024-12-09 23:10:54.571553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.450 [2024-12-09 23:10:54.573711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.450 [2024-12-09 23:10:54.573879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:27.450 [2024-12-09 23:10:54.573905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 00:29:27.450 [2024-12-09 23:10:54.573916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.450 [2024-12-09 23:10:54.580923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.450 [2024-12-09 23:10:54.581119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:27.450 [2024-12-09 23:10:54.581144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.982 ms 00:29:27.450 [2024-12-09 23:10:54.581155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.450 [2024-12-09 23:10:54.586819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.450 [2024-12-09 23:10:54.586866] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:27.450 [2024-12-09 23:10:54.586880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.624 ms 00:29:27.450 [2024-12-09 23:10:54.586891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.450 [2024-12-09 23:10:54.625812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.450 [2024-12-09 23:10:54.625889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:27.450 [2024-12-09 23:10:54.625907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.896 ms 00:29:27.450 [2024-12-09 23:10:54.625918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.450 [2024-12-09 23:10:54.648858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.450 [2024-12-09 23:10:54.649152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:27.450 [2024-12-09 23:10:54.649253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.869 ms 00:29:27.450 [2024-12-09 23:10:54.649290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.450 [2024-12-09 23:10:54.649507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.450 [2024-12-09 23:10:54.649664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:27.450 [2024-12-09 23:10:54.649735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:29:27.450 [2024-12-09 23:10:54.649760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.450 [2024-12-09 23:10:54.689796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.450 [2024-12-09 23:10:54.689874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:27.450 [2024-12-09 23:10:54.689891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.072 ms 00:29:27.450 [2024-12-09 23:10:54.689902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.450 [2024-12-09 23:10:54.729007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.450 [2024-12-09 23:10:54.729329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:27.450 [2024-12-09 23:10:54.729359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.057 ms 00:29:27.450 [2024-12-09 23:10:54.729370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.450 [2024-12-09 23:10:54.767290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.450 [2024-12-09 23:10:54.767366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:27.450 [2024-12-09 23:10:54.767383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.858 ms 00:29:27.450 [2024-12-09 23:10:54.767394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.710 [2024-12-09 23:10:54.806244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.710 [2024-12-09 23:10:54.806565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:27.710 [2024-12-09 23:10:54.806595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.743 ms 00:29:27.710 [2024-12-09 23:10:54.806607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.710 [2024-12-09 23:10:54.806710] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:29:27.710 [2024-12-09 23:10:54.806731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.806997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807558] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:27.710 [2024-12-09 23:10:54.807644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807845] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:27.711 [2024-12-09 23:10:54.807864] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:27.711 [2024-12-09 23:10:54.807874] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e2a83606-4f0a-47b8-82fe-3fe8d4df16c8 00:29:27.711 [2024-12-09 23:10:54.807886] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:27.711 [2024-12-09 23:10:54.807897] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:27.711 [2024-12-09 23:10:54.807906] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:27.711 [2024-12-09 23:10:54.807917] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:27.711 [2024-12-09 23:10:54.807927] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:27.711 [2024-12-09 23:10:54.807938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:27.711 [2024-12-09 23:10:54.807948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:27.711 [2024-12-09 23:10:54.807957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:27.711 [2024-12-09 23:10:54.807968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:27.711 [2024-12-09 23:10:54.807979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.711 [2024-12-09 23:10:54.807996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:27.711 [2024-12-09 23:10:54.808007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:29:27.711 [2024-12-09 23:10:54.808018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.711 [2024-12-09 23:10:54.828697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.711 [2024-12-09 23:10:54.828957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:27.711 [2024-12-09 23:10:54.829070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.680 ms 00:29:27.711 [2024-12-09 23:10:54.829108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.711 [2024-12-09 23:10:54.829809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.711 [2024-12-09 23:10:54.829927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:27.711 [2024-12-09 23:10:54.830005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:29:27.711 [2024-12-09 23:10:54.830039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.711 [2024-12-09 23:10:54.888622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.711 [2024-12-09 23:10:54.888898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:27.711 [2024-12-09 23:10:54.888988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.711 [2024-12-09 23:10:54.889024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.711 [2024-12-09 23:10:54.889178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.711 [2024-12-09 23:10:54.889253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:27.711 [2024-12-09 23:10:54.889291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.711 [2024-12-09 23:10:54.889321] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.711 [2024-12-09 23:10:54.889493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.711 [2024-12-09 23:10:54.889541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:27.711 [2024-12-09 23:10:54.889627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.711 [2024-12-09 23:10:54.889662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.711 [2024-12-09 23:10:54.889707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.711 [2024-12-09 23:10:54.889780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:27.711 [2024-12-09 23:10:54.889815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.711 [2024-12-09 23:10:54.889881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.711 [2024-12-09 23:10:55.021769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.711 [2024-12-09 23:10:55.021997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:27.711 [2024-12-09 23:10:55.022124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.711 [2024-12-09 23:10:55.022162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.988 [2024-12-09 23:10:55.127763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.988 [2024-12-09 23:10:55.128021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:27.988 [2024-12-09 23:10:55.128173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.988 [2024-12-09 23:10:55.128211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.988 [2024-12-09 23:10:55.128338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.988 [2024-12-09 23:10:55.128373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:27.988 [2024-12-09 23:10:55.128404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.988 [2024-12-09 23:10:55.128517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.988 [2024-12-09 23:10:55.128590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.988 [2024-12-09 23:10:55.128625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:27.988 [2024-12-09 23:10:55.128666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.988 [2024-12-09 23:10:55.128696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.988 [2024-12-09 23:10:55.128982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.988 [2024-12-09 23:10:55.129020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:27.988 [2024-12-09 23:10:55.129108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.988 [2024-12-09 23:10:55.129143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.988 [2024-12-09 23:10:55.129220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.988 [2024-12-09 23:10:55.129255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:27.988 [2024-12-09 23:10:55.129336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:29:27.988 [2024-12-09 23:10:55.129376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.988 [2024-12-09 23:10:55.129441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.988 [2024-12-09 23:10:55.129490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:27.988 [2024-12-09 23:10:55.129522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.988 [2024-12-09 23:10:55.129630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.988 [2024-12-09 23:10:55.129701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.988 [2024-12-09 23:10:55.129735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:27.988 [2024-12-09 23:10:55.129771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.988 [2024-12-09 23:10:55.129857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.988 [2024-12-09 23:10:55.130057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 564.537 ms, result 0 00:29:29.380 00:29:29.380 00:29:29.380 23:10:56 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78955 00:29:29.380 23:10:56 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:29:29.380 23:10:56 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78955 00:29:29.380 23:10:56 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78955 ']' 00:29:29.380 23:10:56 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.380 23:10:56 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.380 23:10:56 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.380 23:10:56 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.380 23:10:56 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:29:29.380 [2024-12-09 23:10:56.533065] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
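At this point trim.sh restarts spdk_tgt with -L ftl_init and waitforlisten blocks until the target's RPC socket accepts connections (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above). A hedged Python sketch of that launch-and-poll pattern; the binary and socket paths are taken from the log, while the timeout value and structure are assumptions, not the harness's actual code:

    import socket
    import subprocess
    import time

    # Launch the SPDK target, then poll its UNIX-domain RPC socket
    # until it accepts connections (what waitforlisten observes).
    tgt = subprocess.Popen(
        ["/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt", "-L", "ftl_init"])
    deadline = time.monotonic() + 100  # assumed timeout in seconds
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect("/var/tmp/spdk.sock")
            break  # RPC server is up; the test can issue rpc.py calls
        except OSError:
            time.sleep(0.1)  # not listening yet; retry
    else:
        tgt.terminate()  # never came up within the deadline

In the real harness this logic lives in the waitforlisten helper sourced from common/autotest_common.sh; the sketch only mirrors its observable behavior before rpc.py load_config replays the saved configuration.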
00:29:29.380 [2024-12-09 23:10:56.533210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78955 ] 00:29:29.638 [2024-12-09 23:10:56.717041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.638 [2024-12-09 23:10:56.852370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.574 23:10:57 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.574 23:10:57 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:29:30.574 23:10:57 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:29:30.840 [2024-12-09 23:10:58.007601] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:30.840 [2024-12-09 23:10:58.007969] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:31.107 [2024-12-09 23:10:58.189043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.189375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:31.107 [2024-12-09 23:10:58.189507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:31.107 [2024-12-09 23:10:58.189553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.193223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.193427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:31.107 [2024-12-09 23:10:58.193472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.607 ms 00:29:31.107 [2024-12-09 23:10:58.193485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.193700] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:31.107 [2024-12-09 23:10:58.194774] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:31.107 [2024-12-09 23:10:58.194929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.194947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:31.107 [2024-12-09 23:10:58.194962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:29:31.107 [2024-12-09 23:10:58.194974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.197550] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:31.107 [2024-12-09 23:10:58.218289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.218693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:31.107 [2024-12-09 23:10:58.218727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.774 ms 00:29:31.107 [2024-12-09 23:10:58.218742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.218949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.218967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:31.107 [2024-12-09 23:10:58.218980] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:31.107 [2024-12-09 23:10:58.218994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.231597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.231676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:31.107 [2024-12-09 23:10:58.231692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.560 ms 00:29:31.107 [2024-12-09 23:10:58.231705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.231876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.231896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:31.107 [2024-12-09 23:10:58.231908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:29:31.107 [2024-12-09 23:10:58.231927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.231956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.231970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:31.107 [2024-12-09 23:10:58.231981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:31.107 [2024-12-09 23:10:58.231994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.232023] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:31.107 [2024-12-09 23:10:58.237681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.237987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:31.107 [2024-12-09 23:10:58.238026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.668 ms 00:29:31.107 [2024-12-09 23:10:58.238038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.238158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.238171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:31.107 [2024-12-09 23:10:58.238186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:31.107 [2024-12-09 23:10:58.238200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.238228] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:31.107 [2024-12-09 23:10:58.238255] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:31.107 [2024-12-09 23:10:58.238310] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:31.107 [2024-12-09 23:10:58.238331] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:31.107 [2024-12-09 23:10:58.238426] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:31.107 [2024-12-09 23:10:58.238440] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:31.107 [2024-12-09 23:10:58.238501] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:31.107 [2024-12-09 23:10:58.238515] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:31.107 [2024-12-09 23:10:58.238530] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:31.107 [2024-12-09 23:10:58.238542] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:31.107 [2024-12-09 23:10:58.238555] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:31.107 [2024-12-09 23:10:58.238566] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:31.107 [2024-12-09 23:10:58.238582] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:31.107 [2024-12-09 23:10:58.238593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.238606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:31.107 [2024-12-09 23:10:58.238617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:29:31.107 [2024-12-09 23:10:58.238630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.238710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.107 [2024-12-09 23:10:58.238724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:31.107 [2024-12-09 23:10:58.238736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:31.107 [2024-12-09 23:10:58.238748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.107 [2024-12-09 23:10:58.238842] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:31.107 [2024-12-09 23:10:58.238857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:31.107 [2024-12-09 23:10:58.238868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:31.107 [2024-12-09 23:10:58.238881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.107 [2024-12-09 23:10:58.238891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:31.107 [2024-12-09 23:10:58.238906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:31.107 [2024-12-09 23:10:58.238917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:31.107 [2024-12-09 23:10:58.238932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:31.107 [2024-12-09 23:10:58.238943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:31.107 [2024-12-09 23:10:58.238955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:31.107 [2024-12-09 23:10:58.238965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:31.107 [2024-12-09 23:10:58.238978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:31.107 [2024-12-09 23:10:58.238988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:31.107 [2024-12-09 23:10:58.239000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:31.107 [2024-12-09 23:10:58.239009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:31.108 [2024-12-09 23:10:58.239021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.108 
[2024-12-09 23:10:58.239030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:31.108 [2024-12-09 23:10:58.239042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:31.108 [2024-12-09 23:10:58.239062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.108 [2024-12-09 23:10:58.239074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:31.108 [2024-12-09 23:10:58.239084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:31.108 [2024-12-09 23:10:58.239096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.108 [2024-12-09 23:10:58.239105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:31.108 [2024-12-09 23:10:58.239146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:31.108 [2024-12-09 23:10:58.239156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.108 [2024-12-09 23:10:58.239168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:31.108 [2024-12-09 23:10:58.239178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:31.108 [2024-12-09 23:10:58.239190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.108 [2024-12-09 23:10:58.239199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:31.108 [2024-12-09 23:10:58.239213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:31.108 [2024-12-09 23:10:58.239223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:31.108 [2024-12-09 23:10:58.239235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:31.108 [2024-12-09 23:10:58.239245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:31.108 [2024-12-09 23:10:58.239256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:31.108 [2024-12-09 23:10:58.239266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:31.108 [2024-12-09 23:10:58.239277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:31.108 [2024-12-09 23:10:58.239286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:31.108 [2024-12-09 23:10:58.239298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:31.108 [2024-12-09 23:10:58.239309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:31.108 [2024-12-09 23:10:58.239324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.108 [2024-12-09 23:10:58.239333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:31.108 [2024-12-09 23:10:58.239345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:31.108 [2024-12-09 23:10:58.239354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.108 [2024-12-09 23:10:58.239372] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:31.108 [2024-12-09 23:10:58.239385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:31.108 [2024-12-09 23:10:58.239398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:31.108 [2024-12-09 23:10:58.239408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:31.108 [2024-12-09 23:10:58.239421] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:29:31.108 [2024-12-09 23:10:58.239431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:31.108 [2024-12-09 23:10:58.239443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:31.108 [2024-12-09 23:10:58.239465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:31.108 [2024-12-09 23:10:58.239477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:31.108 [2024-12-09 23:10:58.239487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:31.108 [2024-12-09 23:10:58.239501] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:31.108 [2024-12-09 23:10:58.239514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:31.108 [2024-12-09 23:10:58.239533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:31.108 [2024-12-09 23:10:58.239544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:31.108 [2024-12-09 23:10:58.239559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:31.108 [2024-12-09 23:10:58.239570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:31.108 [2024-12-09 23:10:58.239584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:31.108 [2024-12-09 23:10:58.239595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:31.108 [2024-12-09 23:10:58.239608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:31.108 [2024-12-09 23:10:58.239618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:31.108 [2024-12-09 23:10:58.239632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:31.108 [2024-12-09 23:10:58.239642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:31.108 [2024-12-09 23:10:58.239656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:31.108 [2024-12-09 23:10:58.239666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:31.108 [2024-12-09 23:10:58.239679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:31.108 [2024-12-09 23:10:58.239690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:31.108 [2024-12-09 23:10:58.239702] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:31.108 [2024-12-09 
23:10:58.239715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:31.108 [2024-12-09 23:10:58.239733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:31.108 [2024-12-09 23:10:58.239743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:31.108 [2024-12-09 23:10:58.239757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:31.108 [2024-12-09 23:10:58.239768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:31.108 [2024-12-09 23:10:58.239784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.108 [2024-12-09 23:10:58.239795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:31.108 [2024-12-09 23:10:58.239809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:29:31.108 [2024-12-09 23:10:58.239822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.108 [2024-12-09 23:10:58.283535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.108 [2024-12-09 23:10:58.283610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:31.108 [2024-12-09 23:10:58.283630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.710 ms 00:29:31.108 [2024-12-09 23:10:58.283646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.108 [2024-12-09 23:10:58.283859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.108 [2024-12-09 23:10:58.283875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:31.108 [2024-12-09 23:10:58.283889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:29:31.108 [2024-12-09 23:10:58.283900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.108 [2024-12-09 23:10:58.332608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.108 [2024-12-09 23:10:58.332684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:31.108 [2024-12-09 23:10:58.332703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.753 ms 00:29:31.108 [2024-12-09 23:10:58.332714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.108 [2024-12-09 23:10:58.332839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.108 [2024-12-09 23:10:58.332853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:31.108 [2024-12-09 23:10:58.332867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:31.108 [2024-12-09 23:10:58.332878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.108 [2024-12-09 23:10:58.333339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.108 [2024-12-09 23:10:58.333357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:31.108 [2024-12-09 23:10:58.333371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:29:31.108 [2024-12-09 23:10:58.333381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:31.108 [2024-12-09 23:10:58.333552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.108 [2024-12-09 23:10:58.333568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:31.108 [2024-12-09 23:10:58.333582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:29:31.108 [2024-12-09 23:10:58.333594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.108 [2024-12-09 23:10:58.357785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.108 [2024-12-09 23:10:58.357862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:31.108 [2024-12-09 23:10:58.357883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.197 ms 00:29:31.108 [2024-12-09 23:10:58.357895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.108 [2024-12-09 23:10:58.394985] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:29:31.108 [2024-12-09 23:10:58.395073] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:31.108 [2024-12-09 23:10:58.395098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.108 [2024-12-09 23:10:58.395110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:31.108 [2024-12-09 23:10:58.395129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.088 ms 00:29:31.108 [2024-12-09 23:10:58.395154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.108 [2024-12-09 23:10:58.428062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.108 [2024-12-09 23:10:58.428164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:31.108 [2024-12-09 23:10:58.428203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.749 ms 00:29:31.108 [2024-12-09 23:10:58.428216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 23:10:58.450162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.368 [2024-12-09 23:10:58.450521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:31.368 [2024-12-09 23:10:58.450560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.788 ms 00:29:31.368 [2024-12-09 23:10:58.450571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 23:10:58.472138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.368 [2024-12-09 23:10:58.472211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:31.368 [2024-12-09 23:10:58.472234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.442 ms 00:29:31.368 [2024-12-09 23:10:58.472244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 23:10:58.473114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.368 [2024-12-09 23:10:58.473152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:31.368 [2024-12-09 23:10:58.473169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:29:31.368 [2024-12-09 23:10:58.473179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 
23:10:58.571981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.368 [2024-12-09 23:10:58.572057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:31.368 [2024-12-09 23:10:58.572079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.921 ms 00:29:31.368 [2024-12-09 23:10:58.572090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 23:10:58.588716] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:31.368 [2024-12-09 23:10:58.615257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.368 [2024-12-09 23:10:58.615340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:31.368 [2024-12-09 23:10:58.615361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.086 ms 00:29:31.368 [2024-12-09 23:10:58.615376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 23:10:58.615545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.368 [2024-12-09 23:10:58.615566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:31.368 [2024-12-09 23:10:58.615578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:31.368 [2024-12-09 23:10:58.615592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 23:10:58.615658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.368 [2024-12-09 23:10:58.615673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:31.368 [2024-12-09 23:10:58.615684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:31.368 [2024-12-09 23:10:58.615701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 23:10:58.615727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.368 [2024-12-09 23:10:58.615741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:31.368 [2024-12-09 23:10:58.615753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:31.368 [2024-12-09 23:10:58.615766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 23:10:58.615805] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:31.368 [2024-12-09 23:10:58.615824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.368 [2024-12-09 23:10:58.615839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:31.368 [2024-12-09 23:10:58.615852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:31.368 [2024-12-09 23:10:58.615862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 23:10:58.657661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.368 [2024-12-09 23:10:58.658819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:31.368 [2024-12-09 23:10:58.658860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.827 ms 00:29:31.368 [2024-12-09 23:10:58.658873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 23:10:58.659071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.368 [2024-12-09 23:10:58.659088] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:31.368 [2024-12-09 23:10:58.659103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:31.368 [2024-12-09 23:10:58.659118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.368 [2024-12-09 23:10:58.660199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:31.368 [2024-12-09 23:10:58.665960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 471.605 ms, result 0 00:29:31.368 [2024-12-09 23:10:58.667295] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:31.368 Some configs were skipped because the RPC state that can call them passed over. 00:29:31.630 23:10:58 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:29:31.630 [2024-12-09 23:10:58.916447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.630 [2024-12-09 23:10:58.916752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:29:31.630 [2024-12-09 23:10:58.916891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.643 ms 00:29:31.630 [2024-12-09 23:10:58.916942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.630 [2024-12-09 23:10:58.917028] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.227 ms, result 0 00:29:31.630 true 00:29:31.630 23:10:58 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:29:31.891 [2024-12-09 23:10:59.131963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.891 [2024-12-09 23:10:59.132040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:29:31.891 [2024-12-09 23:10:59.132060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:29:31.891 [2024-12-09 23:10:59.132071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.891 [2024-12-09 23:10:59.132117] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.433 ms, result 0 00:29:31.891 true 00:29:31.891 23:10:59 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78955 00:29:31.891 23:10:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78955 ']' 00:29:31.891 23:10:59 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78955 00:29:31.891 23:10:59 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:29:31.891 23:10:59 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.891 23:10:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78955 00:29:31.891 killing process with pid 78955 00:29:31.891 23:10:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:31.891 23:10:59 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:31.891 23:10:59 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78955' 00:29:31.891 23:10:59 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78955 00:29:31.891 23:10:59 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78955 00:29:33.268 [2024-12-09 23:11:00.350001] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.350089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:33.268 [2024-12-09 23:11:00.350106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:33.268 [2024-12-09 23:11:00.350120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.268 [2024-12-09 23:11:00.350150] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:33.268 [2024-12-09 23:11:00.354613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.354666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:33.268 [2024-12-09 23:11:00.354690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.441 ms 00:29:33.268 [2024-12-09 23:11:00.354701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.268 [2024-12-09 23:11:00.355025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.355041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:33.268 [2024-12-09 23:11:00.355056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:29:33.268 [2024-12-09 23:11:00.355068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.268 [2024-12-09 23:11:00.358366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.358416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:33.268 [2024-12-09 23:11:00.358436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms 00:29:33.268 [2024-12-09 23:11:00.358447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.268 [2024-12-09 23:11:00.364129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.364183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:33.268 [2024-12-09 23:11:00.364204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.608 ms 00:29:33.268 [2024-12-09 23:11:00.364215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.268 [2024-12-09 23:11:00.380421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.380533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:33.268 [2024-12-09 23:11:00.380558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.125 ms 00:29:33.268 [2024-12-09 23:11:00.380570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.268 [2024-12-09 23:11:00.391641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.391959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:33.268 [2024-12-09 23:11:00.391996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.960 ms 00:29:33.268 [2024-12-09 23:11:00.392009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.268 [2024-12-09 23:11:00.392229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.392246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:33.268 [2024-12-09 23:11:00.392260] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:29:33.268 [2024-12-09 23:11:00.392272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.268 [2024-12-09 23:11:00.409227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.409316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:33.268 [2024-12-09 23:11:00.409337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.941 ms 00:29:33.268 [2024-12-09 23:11:00.409348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.268 [2024-12-09 23:11:00.425676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.425752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:33.268 [2024-12-09 23:11:00.425783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.251 ms 00:29:33.268 [2024-12-09 23:11:00.425793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.268 [2024-12-09 23:11:00.441581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.441899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:33.268 [2024-12-09 23:11:00.441937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.726 ms 00:29:33.268 [2024-12-09 23:11:00.441948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.268 [2024-12-09 23:11:00.457921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.268 [2024-12-09 23:11:00.458008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:33.268 [2024-12-09 23:11:00.458029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.855 ms 00:29:33.269 [2024-12-09 23:11:00.458040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.269 [2024-12-09 23:11:00.458137] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:33.269 [2024-12-09 23:11:00.458161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 
23:11:00.458298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:29:33.269 [2024-12-09 23:11:00.458686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.458995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:33.269 [2024-12-09 23:11:00.459151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:33.270 [2024-12-09 23:11:00.459535] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:33.270 [2024-12-09 23:11:00.459556] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e2a83606-4f0a-47b8-82fe-3fe8d4df16c8 00:29:33.270 [2024-12-09 23:11:00.459571] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:33.270 [2024-12-09 23:11:00.459584] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:33.270 [2024-12-09 23:11:00.459594] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:33.270 [2024-12-09 23:11:00.459607] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:33.270 [2024-12-09 23:11:00.459617] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:33.270 [2024-12-09 23:11:00.459631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:33.270 [2024-12-09 23:11:00.459641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:33.270 [2024-12-09 23:11:00.459652] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:33.270 [2024-12-09 23:11:00.459662] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:33.270 [2024-12-09 23:11:00.459676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:33.270 [2024-12-09 23:11:00.459687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:33.270 [2024-12-09 23:11:00.459709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.544 ms 00:29:33.270 [2024-12-09 23:11:00.459720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.270 [2024-12-09 23:11:00.482114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.270 [2024-12-09 23:11:00.482205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:33.270 [2024-12-09 23:11:00.482245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.381 ms 00:29:33.270 [2024-12-09 23:11:00.482257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.270 [2024-12-09 23:11:00.482901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.270 [2024-12-09 23:11:00.482917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:33.270 [2024-12-09 23:11:00.482936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:29:33.270 [2024-12-09 23:11:00.482946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.270 [2024-12-09 23:11:00.555753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.270 [2024-12-09 23:11:00.556039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:33.270 [2024-12-09 23:11:00.556075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.270 [2024-12-09 23:11:00.556086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.270 [2024-12-09 23:11:00.556253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.270 [2024-12-09 23:11:00.556266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:33.270 [2024-12-09 23:11:00.556284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.270 [2024-12-09 23:11:00.556294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.270 [2024-12-09 23:11:00.556367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.270 [2024-12-09 23:11:00.556382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:33.270 [2024-12-09 23:11:00.556398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.270 [2024-12-09 23:11:00.556408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.270 [2024-12-09 23:11:00.556431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.270 [2024-12-09 23:11:00.556442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:33.270 [2024-12-09 23:11:00.556475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.270 [2024-12-09 23:11:00.556489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.529 [2024-12-09 23:11:00.689037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.529 [2024-12-09 23:11:00.689122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:33.529 [2024-12-09 23:11:00.689143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.529 [2024-12-09 23:11:00.689155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.529 [2024-12-09 
23:11:00.792678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.529 [2024-12-09 23:11:00.792764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:33.529 [2024-12-09 23:11:00.792783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.529 [2024-12-09 23:11:00.792798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.529 [2024-12-09 23:11:00.792931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.529 [2024-12-09 23:11:00.792944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:33.529 [2024-12-09 23:11:00.792963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.529 [2024-12-09 23:11:00.792974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.529 [2024-12-09 23:11:00.793007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.529 [2024-12-09 23:11:00.793019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:33.529 [2024-12-09 23:11:00.793033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.529 [2024-12-09 23:11:00.793044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.529 [2024-12-09 23:11:00.793172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.529 [2024-12-09 23:11:00.793185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:33.529 [2024-12-09 23:11:00.793199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.529 [2024-12-09 23:11:00.793210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.529 [2024-12-09 23:11:00.793255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.529 [2024-12-09 23:11:00.793268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:33.529 [2024-12-09 23:11:00.793281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.529 [2024-12-09 23:11:00.793292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.529 [2024-12-09 23:11:00.793337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.529 [2024-12-09 23:11:00.793348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:33.529 [2024-12-09 23:11:00.793364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.529 [2024-12-09 23:11:00.793375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.529 [2024-12-09 23:11:00.793424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:33.529 [2024-12-09 23:11:00.793436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:33.529 [2024-12-09 23:11:00.793482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:33.529 [2024-12-09 23:11:00.793493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.529 [2024-12-09 23:11:00.793659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 444.336 ms, result 0 00:29:34.905 23:11:01 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:29:34.905 23:11:01 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:34.905 [2024-12-09 23:11:01.976111] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:29:34.905 [2024-12-09 23:11:01.976271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79019 ] 00:29:34.905 [2024-12-09 23:11:02.158658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.164 [2024-12-09 23:11:02.297579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.422 [2024-12-09 23:11:02.694938] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:35.422 [2024-12-09 23:11:02.695041] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:35.682 [2024-12-09 23:11:02.859032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.859110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:35.682 [2024-12-09 23:11:02.859127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:35.682 [2024-12-09 23:11:02.859138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.862521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.862573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:35.682 [2024-12-09 23:11:02.862587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.365 ms 00:29:35.682 [2024-12-09 23:11:02.862598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.862740] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:35.682 [2024-12-09 23:11:02.863750] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:35.682 [2024-12-09 23:11:02.863778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.863790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:35.682 [2024-12-09 23:11:02.863801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:29:35.682 [2024-12-09 23:11:02.863812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.866334] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:35.682 [2024-12-09 23:11:02.887801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.887897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:35.682 [2024-12-09 23:11:02.887915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.499 ms 00:29:35.682 [2024-12-09 23:11:02.887928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.888119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.888135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:35.682 [2024-12-09 23:11:02.888148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.047 ms 00:29:35.682 [2024-12-09 23:11:02.888158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.896940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.897003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:35.682 [2024-12-09 23:11:02.897018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.743 ms 00:29:35.682 [2024-12-09 23:11:02.897030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.897174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.897190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:35.682 [2024-12-09 23:11:02.897202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:35.682 [2024-12-09 23:11:02.897217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.897251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.897263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:35.682 [2024-12-09 23:11:02.897274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:35.682 [2024-12-09 23:11:02.897284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.897311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:35.682 [2024-12-09 23:11:02.902218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.902260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:35.682 [2024-12-09 23:11:02.902275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.921 ms 00:29:35.682 [2024-12-09 23:11:02.902285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.902392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.902405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:35.682 [2024-12-09 23:11:02.902417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:35.682 [2024-12-09 23:11:02.902432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.902485] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:35.682 [2024-12-09 23:11:02.902511] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:35.682 [2024-12-09 23:11:02.902549] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:35.682 [2024-12-09 23:11:02.902568] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:35.682 [2024-12-09 23:11:02.902659] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:35.682 [2024-12-09 23:11:02.902672] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:35.682 [2024-12-09 23:11:02.902689] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:35.682 [2024-12-09 23:11:02.902702] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:35.682 [2024-12-09 23:11:02.902715] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:35.682 [2024-12-09 23:11:02.902726] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:35.682 [2024-12-09 23:11:02.902737] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:35.682 [2024-12-09 23:11:02.902747] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:35.682 [2024-12-09 23:11:02.902757] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:35.682 [2024-12-09 23:11:02.902769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.902779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:35.682 [2024-12-09 23:11:02.902790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:29:35.682 [2024-12-09 23:11:02.902800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.902884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.682 [2024-12-09 23:11:02.902903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:35.682 [2024-12-09 23:11:02.902921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:35.682 [2024-12-09 23:11:02.902937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.682 [2024-12-09 23:11:02.903030] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:35.682 [2024-12-09 23:11:02.903043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:35.682 [2024-12-09 23:11:02.903055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:35.682 [2024-12-09 23:11:02.903069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.682 [2024-12-09 23:11:02.903086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:35.682 [2024-12-09 23:11:02.903103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:35.682 [2024-12-09 23:11:02.903115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:35.682 [2024-12-09 23:11:02.903125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:35.682 [2024-12-09 23:11:02.903135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:35.682 [2024-12-09 23:11:02.903144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:35.682 [2024-12-09 23:11:02.903153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:35.682 [2024-12-09 23:11:02.903181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:35.682 [2024-12-09 23:11:02.903191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:35.682 [2024-12-09 23:11:02.903200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:35.682 [2024-12-09 23:11:02.903210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:35.682 [2024-12-09 23:11:02.903220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.682 [2024-12-09 23:11:02.903229] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:35.682 [2024-12-09 23:11:02.903239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:35.682 [2024-12-09 23:11:02.903248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.682 [2024-12-09 23:11:02.903258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:35.682 [2024-12-09 23:11:02.903267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:35.682 [2024-12-09 23:11:02.903276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.682 [2024-12-09 23:11:02.903286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:35.682 [2024-12-09 23:11:02.903295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:35.682 [2024-12-09 23:11:02.903304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.682 [2024-12-09 23:11:02.903314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:35.682 [2024-12-09 23:11:02.903323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:35.682 [2024-12-09 23:11:02.903332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.682 [2024-12-09 23:11:02.903341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:35.682 [2024-12-09 23:11:02.903350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:35.682 [2024-12-09 23:11:02.903358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:35.682 [2024-12-09 23:11:02.903369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:35.682 [2024-12-09 23:11:02.903385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:35.682 [2024-12-09 23:11:02.903401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:35.682 [2024-12-09 23:11:02.903418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:35.682 [2024-12-09 23:11:02.903430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:35.682 [2024-12-09 23:11:02.903440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:35.682 [2024-12-09 23:11:02.903462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:35.682 [2024-12-09 23:11:02.903473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:35.682 [2024-12-09 23:11:02.903483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.682 [2024-12-09 23:11:02.903492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:35.683 [2024-12-09 23:11:02.903501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:35.683 [2024-12-09 23:11:02.903511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.683 [2024-12-09 23:11:02.903521] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:35.683 [2024-12-09 23:11:02.903536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:35.683 [2024-12-09 23:11:02.903547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:35.683 [2024-12-09 23:11:02.903556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:35.683 [2024-12-09 23:11:02.903567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:35.683 
[2024-12-09 23:11:02.903577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:35.683 [2024-12-09 23:11:02.903586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:35.683 [2024-12-09 23:11:02.903596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:35.683 [2024-12-09 23:11:02.903605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:35.683 [2024-12-09 23:11:02.903614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:35.683 [2024-12-09 23:11:02.903625] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:35.683 [2024-12-09 23:11:02.903647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.683 [2024-12-09 23:11:02.903666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:35.683 [2024-12-09 23:11:02.903679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:35.683 [2024-12-09 23:11:02.903690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:35.683 [2024-12-09 23:11:02.903703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:35.683 [2024-12-09 23:11:02.903714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:35.683 [2024-12-09 23:11:02.903724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:35.683 [2024-12-09 23:11:02.903735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:35.683 [2024-12-09 23:11:02.903745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:35.683 [2024-12-09 23:11:02.903756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:35.683 [2024-12-09 23:11:02.903770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:35.683 [2024-12-09 23:11:02.903788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:35.683 [2024-12-09 23:11:02.903799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:35.683 [2024-12-09 23:11:02.903810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:35.683 [2024-12-09 23:11:02.903821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:35.683 [2024-12-09 23:11:02.903831] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:35.683 [2024-12-09 23:11:02.903843] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:35.683 [2024-12-09 23:11:02.903860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:35.683 [2024-12-09 23:11:02.903871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:35.683 [2024-12-09 23:11:02.903883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:35.683 [2024-12-09 23:11:02.903897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:35.683 [2024-12-09 23:11:02.903914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.683 [2024-12-09 23:11:02.903933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:35.683 [2024-12-09 23:11:02.903950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:29:35.683 [2024-12-09 23:11:02.903964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.683 [2024-12-09 23:11:02.949306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.683 [2024-12-09 23:11:02.949662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:35.683 [2024-12-09 23:11:02.949694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.345 ms 00:29:35.683 [2024-12-09 23:11:02.949714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.683 [2024-12-09 23:11:02.949916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.683 [2024-12-09 23:11:02.949931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:35.683 [2024-12-09 23:11:02.949942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:35.683 [2024-12-09 23:11:02.949953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.683 [2024-12-09 23:11:03.007261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.683 [2024-12-09 23:11:03.007337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:35.683 [2024-12-09 23:11:03.007354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.374 ms 00:29:35.683 [2024-12-09 23:11:03.007366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.683 [2024-12-09 23:11:03.007515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.683 [2024-12-09 23:11:03.007531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:35.683 [2024-12-09 23:11:03.007543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:35.683 [2024-12-09 23:11:03.007554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.683 [2024-12-09 23:11:03.008012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.683 [2024-12-09 23:11:03.008027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:35.683 [2024-12-09 23:11:03.008046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:29:35.683 [2024-12-09 23:11:03.008056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.683 [2024-12-09 
23:11:03.008211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.683 [2024-12-09 23:11:03.008233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:35.683 [2024-12-09 23:11:03.008244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:29:35.683 [2024-12-09 23:11:03.008255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.942 [2024-12-09 23:11:03.029958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.942 [2024-12-09 23:11:03.030029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:35.942 [2024-12-09 23:11:03.030046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.711 ms 00:29:35.942 [2024-12-09 23:11:03.030058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.942 [2024-12-09 23:11:03.051941] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:29:35.942 [2024-12-09 23:11:03.052023] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:35.942 [2024-12-09 23:11:03.052042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.942 [2024-12-09 23:11:03.052054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:35.942 [2024-12-09 23:11:03.052068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.845 ms 00:29:35.942 [2024-12-09 23:11:03.052078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.942 [2024-12-09 23:11:03.085367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.942 [2024-12-09 23:11:03.085476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:35.942 [2024-12-09 23:11:03.085496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.161 ms 00:29:35.942 [2024-12-09 23:11:03.085507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.942 [2024-12-09 23:11:03.106527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.942 [2024-12-09 23:11:03.106607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:35.942 [2024-12-09 23:11:03.106624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.866 ms 00:29:35.942 [2024-12-09 23:11:03.106634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.942 [2024-12-09 23:11:03.127159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.942 [2024-12-09 23:11:03.127237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:35.942 [2024-12-09 23:11:03.127254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.413 ms 00:29:35.942 [2024-12-09 23:11:03.127265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.942 [2024-12-09 23:11:03.128193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:35.942 [2024-12-09 23:11:03.128238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:35.942 [2024-12-09 23:11:03.128252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:29:35.942 [2024-12-09 23:11:03.128262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:35.942 [2024-12-09 23:11:03.224852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action
00:29:35.942 [2024-12-09 23:11:03.224942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:29:35.942 [2024-12-09 23:11:03.224961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.699 ms
00:29:35.942 [2024-12-09 23:11:03.224973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:35.942 [2024-12-09 23:11:03.238975] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:29:35.942 [2024-12-09 23:11:03.263656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:35.943 [2024-12-09 23:11:03.263726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:29:35.943 [2024-12-09 23:11:03.263750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.578 ms
00:29:35.943 [2024-12-09 23:11:03.263762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:35.943 [2024-12-09 23:11:03.263919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:35.943 [2024-12-09 23:11:03.263934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:29:35.943 [2024-12-09 23:11:03.263946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:29:35.943 [2024-12-09 23:11:03.263956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:35.943 [2024-12-09 23:11:03.264013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:35.943 [2024-12-09 23:11:03.264025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:29:35.943 [2024-12-09 23:11:03.264041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:29:35.943 [2024-12-09 23:11:03.264054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:35.943 [2024-12-09 23:11:03.264087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:35.943 [2024-12-09 23:11:03.264101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:29:35.943 [2024-12-09 23:11:03.264113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:29:35.943 [2024-12-09 23:11:03.264123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:35.943 [2024-12-09 23:11:03.264163] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:29:35.943 [2024-12-09 23:11:03.264176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:35.943 [2024-12-09 23:11:03.264186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:29:35.943 [2024-12-09 23:11:03.264196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:29:35.943 [2024-12-09 23:11:03.264206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:36.202 [2024-12-09 23:11:03.305674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:36.202 [2024-12-09 23:11:03.305983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:29:36.202 [2024-12-09 23:11:03.306016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.505 ms
00:29:36.202 [2024-12-09 23:11:03.306028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:36.202 [2024-12-09 23:11:03.306211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:36.202 [2024-12-09 23:11:03.306227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:29:36.202 [2024-12-09 23:11:03.306239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:29:36.202 [2024-12-09 23:11:03.306257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:36.202 [2024-12-09 23:11:03.307591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:36.202 [2024-12-09 23:11:03.313505] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 448.940 ms, result 0
00:29:36.202 [2024-12-09 23:11:03.314549] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:36.202 [2024-12-09 23:11:03.334872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:37.138  [2024-12-09T23:11:05.409Z] Copying: 28/256 [MB] (28 MBps) [2024-12-09T23:11:06.361Z] Copying: 53/256 [MB] (25 MBps) [2024-12-09T23:11:07.736Z] Copying: 78/256 [MB] (24 MBps) [2024-12-09T23:11:08.670Z] Copying: 104/256 [MB] (25 MBps) [2024-12-09T23:11:09.606Z] Copying: 128/256 [MB] (24 MBps) [2024-12-09T23:11:10.545Z] Copying: 153/256 [MB] (24 MBps) [2024-12-09T23:11:11.479Z] Copying: 178/256 [MB] (25 MBps) [2024-12-09T23:11:12.418Z] Copying: 202/256 [MB] (23 MBps) [2024-12-09T23:11:13.355Z] Copying: 226/256 [MB] (24 MBps) [2024-12-09T23:11:13.614Z] Copying: 250/256 [MB] (24 MBps) [2024-12-09T23:11:13.614Z] Copying: 256/256 [MB] (average 25 MBps)[2024-12-09 23:11:13.551857] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:46.278 [2024-12-09 23:11:13.568307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.278 [2024-12-09 23:11:13.568382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:46.278 [2024-12-09 23:11:13.568411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:29:46.278 [2024-12-09 23:11:13.568424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.278 [2024-12-09 23:11:13.568479] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:29:46.278 [2024-12-09 23:11:13.572662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.278 [2024-12-09 23:11:13.572703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:29:46.278 [2024-12-09 23:11:13.572716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.171 ms
00:29:46.278 [2024-12-09 23:11:13.572727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.278 [2024-12-09 23:11:13.572978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.278 [2024-12-09 23:11:13.572992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:29:46.278 [2024-12-09 23:11:13.573005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms
00:29:46.278 [2024-12-09 23:11:13.573016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.278 [2024-12-09 23:11:13.575889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.278 [2024-12-09 23:11:13.575917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:29:46.278 [2024-12-09 23:11:13.575930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.855 ms
00:29:46.278 [2024-12-09 23:11:13.575940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.278 [2024-12-09 23:11:13.581574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.278 [2024-12-09 23:11:13.581786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:29:46.278 [2024-12-09 23:11:13.581814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.620 ms
00:29:46.278 [2024-12-09 23:11:13.581825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.580 [2024-12-09 23:11:13.624298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.580 [2024-12-09 23:11:13.624384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:29:46.580 [2024-12-09 23:11:13.624403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.433 ms
00:29:46.580 [2024-12-09 23:11:13.624414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.580 [2024-12-09 23:11:13.649677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.580 [2024-12-09 23:11:13.649778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:29:46.580 [2024-12-09 23:11:13.649797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.175 ms
00:29:46.580 [2024-12-09 23:11:13.649808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.580 [2024-12-09 23:11:13.650080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.580 [2024-12-09 23:11:13.650096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:29:46.580 [2024-12-09 23:11:13.650121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms
00:29:46.580 [2024-12-09 23:11:13.650132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.580 [2024-12-09 23:11:13.693012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.580 [2024-12-09 23:11:13.693095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:29:46.580 [2024-12-09 23:11:13.693113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.926 ms
00:29:46.580 [2024-12-09 23:11:13.693124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.580 [2024-12-09 23:11:13.734858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.580 [2024-12-09 23:11:13.734937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:29:46.580 [2024-12-09 23:11:13.734954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.687 ms
00:29:46.581 [2024-12-09 23:11:13.734965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.581 [2024-12-09 23:11:13.776218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.581 [2024-12-09 23:11:13.776293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:29:46.581 [2024-12-09 23:11:13.776310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.209 ms
00:29:46.581 [2024-12-09 23:11:13.776321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.581 [2024-12-09 23:11:13.817615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.581 [2024-12-09 23:11:13.817926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:29:46.581 [2024-12-09 23:11:13.817953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.186 ms
00:29:46.581 [2024-12-09 23:11:13.817965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.581 [2024-12-09 23:11:13.818089] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:46.581 [2024-12-09 23:11:13.818111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.818998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.819008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.819018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.819028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:29:46.581 [2024-12-09 23:11:13.819039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:29:46.582 [2024-12-09 23:11:13.819283] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:46.582 [2024-12-09 23:11:13.819298] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e2a83606-4f0a-47b8-82fe-3fe8d4df16c8
00:29:46.582 [2024-12-09 23:11:13.819310] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:29:46.582 [2024-12-09 23:11:13.819321] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:29:46.582 [2024-12-09 23:11:13.819331] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:29:46.582 [2024-12-09 23:11:13.819342] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:29:46.582 [2024-12-09 23:11:13.819352] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:46.582 [2024-12-09 23:11:13.819368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:29:46.582 [2024-12-09 23:11:13.819379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:29:46.582 [2024-12-09 23:11:13.819388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:29:46.582 [2024-12-09 23:11:13.819398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:29:46.582 [2024-12-09 23:11:13.819409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.582 [2024-12-09 23:11:13.819419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:29:46.582 [2024-12-09 23:11:13.819430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms
00:29:46.582 [2024-12-09 23:11:13.819440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.582 [2024-12-09 23:11:13.840436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.582 [2024-12-09 23:11:13.840739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:29:46.582 [2024-12-09 23:11:13.840768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.989 ms
00:29:46.582 [2024-12-09 23:11:13.840788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.582 [2024-12-09 23:11:13.841447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:46.582 [2024-12-09 23:11:13.841476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:29:46.582 [2024-12-09 23:11:13.841489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms
00:29:46.582 [2024-12-09 23:11:13.841499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:13.898471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:13.898565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:46.874 [2024-12-09 23:11:13.898612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:13.898624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:13.898774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:13.898788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:46.874 [2024-12-09 23:11:13.898800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:13.898810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:13.898875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:13.898889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:29:46.874 [2024-12-09 23:11:13.898901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:13.898920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:13.898940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:13.898951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:29:46.874 [2024-12-09 23:11:13.898962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:13.898973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:14.029128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:14.029391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:29:46.874 [2024-12-09 23:11:14.029419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:14.029441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:14.138196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:14.138271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:46.874 [2024-12-09 23:11:14.138288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:14.138300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:14.138430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:14.138443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:46.874 [2024-12-09 23:11:14.138488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:14.138499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:14.138540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:14.138552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:46.874 [2024-12-09 23:11:14.138564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:14.138575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:14.138708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:14.138723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:46.874 [2024-12-09 23:11:14.138734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:14.138745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:14.138795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:14.138817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:29:46.874 [2024-12-09 23:11:14.138827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:14.138837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:14.138881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:14.138892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:46.874 [2024-12-09 23:11:14.138903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:14.138913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:14.138965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:46.874 [2024-12-09 23:11:14.138978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:46.874 [2024-12-09 23:11:14.138988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:46.874 [2024-12-09 23:11:14.138998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:46.874 [2024-12-09 23:11:14.139151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 571.775 ms, result 0
00:29:48.255
00:29:48.255
00:29:48.255 23:11:15 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:29:48.255 23:11:15 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:29:48.514 23:11:15 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:48.514 [2024-12-09 23:11:15.795604] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization...
00:29:48.514 [2024-12-09 23:11:15.795755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79163 ]
00:29:48.772 [2024-12-09 23:11:15.975690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:49.032 [2024-12-09 23:11:16.108674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:49.290 [2024-12-09 23:11:16.496053] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:49.290 [2024-12-09 23:11:16.496141] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:49.550 [2024-12-09 23:11:16.660550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.660872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:29:49.550 [2024-12-09 23:11:16.660902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:29:49.550 [2024-12-09 23:11:16.660915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.664447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.664507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:49.550 [2024-12-09 23:11:16.664522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.501 ms
00:29:49.550 [2024-12-09 23:11:16.664550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.664688] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:29:49.550 [2024-12-09 23:11:16.665734] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:29:49.550 [2024-12-09 23:11:16.665769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.665782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:49.550 [2024-12-09 23:11:16.665794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.094 ms
00:29:49.550 [2024-12-09 23:11:16.665805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.667719] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:29:49.550 [2024-12-09 23:11:16.687509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.687579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:29:49.550 [2024-12-09 23:11:16.687596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.819 ms
00:29:49.550 [2024-12-09 23:11:16.687607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.687783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.687800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:29:49.550 [2024-12-09 23:11:16.687812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms
00:29:49.550 [2024-12-09 23:11:16.687822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.696525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.696579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:29:49.550 [2024-12-09 23:11:16.696592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.666 ms
00:29:49.550 [2024-12-09 23:11:16.696603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.696753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.696769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:29:49.550 [2024-12-09 23:11:16.696780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms
00:29:49.550 [2024-12-09 23:11:16.696791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.696829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.696841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:29:49.550 [2024-12-09 23:11:16.696852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:29:49.550 [2024-12-09 23:11:16.696862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.696888] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:29:49.550 [2024-12-09 23:11:16.701691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.701729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:29:49.550 [2024-12-09 23:11:16.701743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.819 ms
00:29:49.550 [2024-12-09 23:11:16.701753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.701848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.701861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:29:49.550 [2024-12-09 23:11:16.701873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:29:49.550 [2024-12-09 23:11:16.701884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.701914] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:29:49.550 [2024-12-09 23:11:16.701938] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:29:49.550 [2024-12-09 23:11:16.701975] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:29:49.550 [2024-12-09 23:11:16.701994] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:29:49.550 [2024-12-09 23:11:16.702085] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:29:49.550 [2024-12-09 23:11:16.702099] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:29:49.550 [2024-12-09 23:11:16.702113] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:29:49.550 [2024-12-09 23:11:16.702129] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:29:49.550 [2024-12-09 23:11:16.702142] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:29:49.550 [2024-12-09 23:11:16.702154] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:29:49.550 [2024-12-09 23:11:16.702174] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:29:49.550 [2024-12-09 23:11:16.702184] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:29:49.550 [2024-12-09 23:11:16.702194] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:29:49.550 [2024-12-09 23:11:16.702205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.702215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:29:49.550 [2024-12-09 23:11:16.702226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms
00:29:49.550 [2024-12-09 23:11:16.702236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.702313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.550 [2024-12-09 23:11:16.702329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:29:49.550 [2024-12-09 23:11:16.702339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:29:49.550 [2024-12-09 23:11:16.702349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.550 [2024-12-09 23:11:16.702439] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:29:49.550 [2024-12-09 23:11:16.702643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:29:49.550 [2024-12-09 23:11:16.702694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:29:49.550 [2024-12-09 23:11:16.702726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:49.550 [2024-12-09 23:11:16.702757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:29:49.550 [2024-12-09 23:11:16.702787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:29:49.550 [2024-12-09 23:11:16.702817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:29:49.550 [2024-12-09 23:11:16.702848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:29:49.550 [2024-12-09 23:11:16.702878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:29:49.550 [2024-12-09 23:11:16.702967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:29:49.550 [2024-12-09 23:11:16.703003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:29:49.550 [2024-12-09 23:11:16.703047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:29:49.550 [2024-12-09 23:11:16.703077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:29:49.550 [2024-12-09 23:11:16.703106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:29:49.550 [2024-12-09 23:11:16.703136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:29:49.550 [2024-12-09 23:11:16.703213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:49.550 [2024-12-09 23:11:16.703248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:29:49.550 [2024-12-09 23:11:16.703277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:29:49.550 [2024-12-09 23:11:16.703307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:49.550 [2024-12-09 23:11:16.703336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:29:49.550 [2024-12-09 23:11:16.703365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:29:49.550 [2024-12-09 23:11:16.703394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:49.550 [2024-12-09 23:11:16.703549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:29:49.550 [2024-12-09 23:11:16.703580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:29:49.550 [2024-12-09 23:11:16.703609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:49.550 [2024-12-09 23:11:16.703638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:29:49.550 [2024-12-09 23:11:16.703650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:29:49.550 [2024-12-09 23:11:16.703659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:49.550 [2024-12-09 23:11:16.703668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:29:49.550 [2024-12-09 23:11:16.703678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:29:49.550 [2024-12-09 23:11:16.703687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:29:49.550 [2024-12-09 23:11:16.703696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:29:49.550 [2024-12-09 23:11:16.703706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:29:49.550 [2024-12-09 23:11:16.703715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:29:49.551 [2024-12-09 23:11:16.703725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:29:49.551 [2024-12-09 23:11:16.703734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:29:49.551 [2024-12-09 23:11:16.703743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:29:49.551 [2024-12-09 23:11:16.703752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:29:49.551 [2024-12-09 23:11:16.703762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:29:49.551 [2024-12-09 23:11:16.703771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:49.551 [2024-12-09 23:11:16.703780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:29:49.551 [2024-12-09 23:11:16.703788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:29:49.551 [2024-12-09 23:11:16.703797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:49.551 [2024-12-09 23:11:16.703806] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:29:49.551 [2024-12-09 23:11:16.703817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:29:49.551 [2024-12-09 23:11:16.703833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:29:49.551 [2024-12-09 23:11:16.703843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:29:49.551 [2024-12-09 23:11:16.703854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:29:49.551 [2024-12-09 23:11:16.703865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:29:49.551 [2024-12-09 23:11:16.703875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:29:49.551 [2024-12-09 23:11:16.703885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:29:49.551 [2024-12-09 23:11:16.703895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:29:49.551 [2024-12-09 23:11:16.703905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:29:49.551 [2024-12-09 23:11:16.703917] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:29:49.551 [2024-12-09 23:11:16.703931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:49.551 [2024-12-09 23:11:16.703943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:29:49.551 [2024-12-09 23:11:16.703955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:29:49.551 [2024-12-09 23:11:16.703965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:29:49.551 [2024-12-09 23:11:16.703976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:29:49.551 [2024-12-09 23:11:16.703987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:29:49.551 [2024-12-09 23:11:16.703998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:29:49.551 [2024-12-09 23:11:16.704009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:29:49.551 [2024-12-09 23:11:16.704020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:29:49.551 [2024-12-09 23:11:16.704030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:29:49.551 [2024-12-09 23:11:16.704041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:29:49.551 [2024-12-09 23:11:16.704052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:29:49.551 [2024-12-09 23:11:16.704062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:29:49.551 [2024-12-09 23:11:16.704072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:29:49.551 [2024-12-09 23:11:16.704083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:29:49.551 [2024-12-09 23:11:16.704093] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:29:49.551 [2024-12-09 23:11:16.704105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:49.551 [2024-12-09 23:11:16.704116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:29:49.551 [2024-12-09 23:11:16.704127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:29:49.551 [2024-12-09 23:11:16.704137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:29:49.551 [2024-12-09 23:11:16.704147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:29:49.551 [2024-12-09 23:11:16.704160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.551 [2024-12-09 23:11:16.704175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:29:49.551 [2024-12-09 23:11:16.704185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.778 ms
00:29:49.551 [2024-12-09 23:11:16.704196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.551 [2024-12-09 23:11:16.750092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.551 [2024-12-09 23:11:16.750163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:29:49.551 [2024-12-09 23:11:16.750180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.893 ms
00:29:49.551 [2024-12-09 23:11:16.750207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.551 [2024-12-09 23:11:16.750398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.551 [2024-12-09 23:11:16.750413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:29:49.551 [2024-12-09 23:11:16.750425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:29:49.551 [2024-12-09 23:11:16.750436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.551 [2024-12-09 23:11:16.816315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.551 [2024-12-09 23:11:16.816385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:29:49.551 [2024-12-09 23:11:16.816408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.927 ms
00:29:49.551 [2024-12-09 23:11:16.816434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.551 [2024-12-09 23:11:16.816622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.551 [2024-12-09 23:11:16.816637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:29:49.551 [2024-12-09 23:11:16.816650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:29:49.551 [2024-12-09 23:11:16.816660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.551 [2024-12-09 23:11:16.817115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.551 [2024-12-09 23:11:16.817138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:29:49.551 [2024-12-09 23:11:16.817159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms
00:29:49.551 [2024-12-09 23:11:16.817169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.551 [2024-12-09 23:11:16.817297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.551 [2024-12-09 23:11:16.817313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:29:49.551 [2024-12-09 23:11:16.817324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms
00:29:49.551 [2024-12-09 23:11:16.817335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.551 [2024-12-09 23:11:16.839927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.551 [2024-12-09 23:11:16.840240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:29:49.551 [2024-12-09 23:11:16.840271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.603 ms
00:29:49.551 [2024-12-09 23:11:16.840282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.551 [2024-12-09 23:11:16.861143] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:29:49.551 [2024-12-09 23:11:16.861219] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:29:49.551 [2024-12-09 23:11:16.861254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.551 [2024-12-09 23:11:16.861266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:29:49.551 [2024-12-09 23:11:16.861281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.791 ms
00:29:49.551 [2024-12-09 23:11:16.861292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.810 [2024-12-09 23:11:16.892344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.810 [2024-12-09 23:11:16.892424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:29:49.810 [2024-12-09 23:11:16.892441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.947 ms
00:29:49.810 [2024-12-09 23:11:16.892484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.810 [2024-12-09 23:11:16.912520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.810 [2024-12-09 23:11:16.912594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:29:49.810 [2024-12-09 23:11:16.912610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.899 ms
00:29:49.810 [2024-12-09 23:11:16.912621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.810 [2024-12-09 23:11:16.932526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.810 [2024-12-09 23:11:16.932602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:29:49.810 [2024-12-09 23:11:16.932619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.791 ms
00:29:49.810 [2024-12-09 23:11:16.932629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.810 [2024-12-09 23:11:16.933488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.810 [2024-12-09 23:11:16.933515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:29:49.810 [2024-12-09 23:11:16.933528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms
00:29:49.810 [2024-12-09 23:11:16.933539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.810 [2024-12-09 23:11:17.029631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.810 [2024-12-09 23:11:17.029970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:29:49.811 [2024-12-09 23:11:17.029998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.210 ms
00:29:49.811 [2024-12-09 23:11:17.030012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.811 [2024-12-09 23:11:17.044890] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:29:49.811 [2024-12-09 23:11:17.069934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.811 [2024-12-09 23:11:17.069993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:29:49.811 [2024-12-09 23:11:17.070009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.786 ms
00:29:49.811 [2024-12-09 23:11:17.070026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.811 [2024-12-09 23:11:17.070164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.811 [2024-12-09 23:11:17.070178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:29:49.811 [2024-12-09 23:11:17.070189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:29:49.811 [2024-12-09 23:11:17.070200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.811 [2024-12-09 23:11:17.070264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.811 [2024-12-09 23:11:17.070276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:29:49.811 [2024-12-09 23:11:17.070287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:29:49.811 [2024-12-09 23:11:17.070302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.811 [2024-12-09 23:11:17.070343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.811 [2024-12-09 23:11:17.070357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:29:49.811 [2024-12-09 23:11:17.070368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:29:49.811 [2024-12-09 23:11:17.070378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.811 [2024-12-09 23:11:17.070417] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:29:49.811 [2024-12-09 23:11:17.070429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.811 [2024-12-09 23:11:17.070440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:29:49.811 [2024-12-09 23:11:17.070471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:29:49.811 [2024-12-09 23:11:17.070482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.811 [2024-12-09 23:11:17.112877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.811 [2024-12-09 23:11:17.112960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:29:49.811 [2024-12-09 23:11:17.112979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.432 ms
00:29:49.811 [2024-12-09 23:11:17.113008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.811 [2024-12-09 23:11:17.113207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:49.811 [2024-12-09 23:11:17.113224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:29:49.811 [2024-12-09 23:11:17.113237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:29:49.811 [2024-12-09 23:11:17.113249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:49.811 [2024-12-09 23:11:17.114279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:49.811 [2024-12-09 23:11:17.120401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 454.163 ms, result 0
00:29:49.811 [2024-12-09 23:11:17.121506] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:49.811 [2024-12-09 23:11:17.142069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:50.069  [2024-12-09T23:11:17.405Z] Copying: 4096/4096 [kB] (average 24 MBps)[2024-12-09 23:11:17.312751] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:50.069 [2024-12-09 23:11:17.328705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.069 [2024-12-09 23:11:17.328785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:29:50.069 [2024-12-09 23:11:17.328811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:29:50.069 [2024-12-09 23:11:17.328823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.069 [2024-12-09 23:11:17.328855] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:29:50.069 [2024-12-09 23:11:17.332965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.069 [2024-12-09 23:11:17.333001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:29:50.069 [2024-12-09 23:11:17.333015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.097 ms
00:29:50.069 [2024-12-09 23:11:17.333026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.070 [2024-12-09 23:11:17.335336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.070 [2024-12-09 23:11:17.335387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:29:50.070 [2024-12-09 23:11:17.335402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.276 ms
00:29:50.070 [2024-12-09 23:11:17.335429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.070 [2024-12-09 23:11:17.338665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.070 [2024-12-09 23:11:17.338721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:29:50.070 [2024-12-09 23:11:17.338735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms
00:29:50.070 [2024-12-09 23:11:17.338746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.070 [2024-12-09 23:11:17.344412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.070 [2024-12-09 23:11:17.344466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:29:50.070 [2024-12-09 23:11:17.344481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.637 ms
00:29:50.070 [2024-12-09 23:11:17.344492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.070 [2024-12-09 23:11:17.385657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.070 [2024-12-09 23:11:17.385970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:29:50.070 [2024-12-09 23:11:17.385999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.166 ms
00:29:50.070 [2024-12-09 23:11:17.386012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.330 [2024-12-09 23:11:17.409716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.330 [2024-12-09 23:11:17.409814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:29:50.330 [2024-12-09 23:11:17.409833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.545 ms
00:29:50.330 [2024-12-09 23:11:17.409845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.330 [2024-12-09 23:11:17.410039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.330 [2024-12-09 23:11:17.410055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:29:50.330 [2024-12-09 23:11:17.410080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms
00:29:50.330 [2024-12-09 23:11:17.410091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.330 [2024-12-09 23:11:17.452477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.330 [2024-12-09 23:11:17.452758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:29:50.330 [2024-12-09 23:11:17.452786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.430 ms
00:29:50.330 [2024-12-09 23:11:17.452797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.330 [2024-12-09 23:11:17.494889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.330 [2024-12-09 23:11:17.494974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:29:50.330 [2024-12-09 23:11:17.494991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.952 ms
00:29:50.330 [2024-12-09 23:11:17.495002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.330 [2024-12-09 23:11:17.536524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.330 [2024-12-09 23:11:17.536602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:29:50.330 [2024-12-09 23:11:17.536620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.472 ms
00:29:50.330 [2024-12-09 23:11:17.536631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.330 [2024-12-09 23:11:17.577001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:50.330 [2024-12-09 23:11:17.577288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:29:50.330 [2024-12-09 23:11:17.577316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.285 ms
00:29:50.330 [2024-12-09 23:11:17.577328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.330 [2024-12-09 23:11:17.577520] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:50.330 [2024-12-09 23:11:17.577543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:29:50.330 [2024-12-09 23:11:17.577557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:29:50.330 [2024-12-09 23:11:17.577569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:29:50.330 [2024-12-09 23:11:17.577583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:29:50.330 [2024-12-09 23:11:17.577595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:50.330 [2024-12-09 23:11:17.577771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.577995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578403] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:50.331 [2024-12-09 23:11:17.578690] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:50.331 [2024-12-09 23:11:17.578700] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e2a83606-4f0a-47b8-82fe-3fe8d4df16c8 00:29:50.331 [2024-12-09 23:11:17.578712] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:50.331 [2024-12-09 23:11:17.578723] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:29:50.331 [2024-12-09 23:11:17.578733] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:50.331 [2024-12-09 23:11:17.578744] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:50.331 [2024-12-09 23:11:17.578755] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:50.331 [2024-12-09 23:11:17.578766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:50.331 [2024-12-09 23:11:17.578782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:50.331 [2024-12-09 23:11:17.578792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:50.331 [2024-12-09 23:11:17.578801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:50.331 [2024-12-09 23:11:17.578811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.331 [2024-12-09 23:11:17.578822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:50.331 [2024-12-09 23:11:17.578835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.295 ms 00:29:50.331 [2024-12-09 23:11:17.578846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.331 [2024-12-09 23:11:17.600599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.331 [2024-12-09 23:11:17.600667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:50.332 [2024-12-09 23:11:17.600683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.759 ms 00:29:50.332 [2024-12-09 23:11:17.600693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.332 [2024-12-09 23:11:17.601391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:50.332 [2024-12-09 23:11:17.601409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:50.332 [2024-12-09 23:11:17.601420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:29:50.332 [2024-12-09 23:11:17.601431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.332 [2024-12-09 23:11:17.657823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:50.332 [2024-12-09 23:11:17.657896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:50.332 [2024-12-09 23:11:17.657912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:50.332 [2024-12-09 23:11:17.657930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.332 [2024-12-09 23:11:17.658046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:50.332 [2024-12-09 23:11:17.658058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:50.332 [2024-12-09 23:11:17.658069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:50.332 [2024-12-09 23:11:17.658079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.332 [2024-12-09 23:11:17.658136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:50.332 [2024-12-09 23:11:17.658149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:50.332 [2024-12-09 23:11:17.658160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:50.332 [2024-12-09 23:11:17.658170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.332 [2024-12-09 23:11:17.658196] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:50.332 [2024-12-09 23:11:17.658207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:50.332 [2024-12-09 23:11:17.658218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:50.332 [2024-12-09 23:11:17.658228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.591 [2024-12-09 23:11:17.782077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:50.591 [2024-12-09 23:11:17.782147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:50.591 [2024-12-09 23:11:17.782164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:50.591 [2024-12-09 23:11:17.782184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.591 [2024-12-09 23:11:17.890841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:50.591 [2024-12-09 23:11:17.890927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:50.591 [2024-12-09 23:11:17.890943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:50.591 [2024-12-09 23:11:17.890954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.591 [2024-12-09 23:11:17.891065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:50.591 [2024-12-09 23:11:17.891078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:50.591 [2024-12-09 23:11:17.891090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:50.591 [2024-12-09 23:11:17.891101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.591 [2024-12-09 23:11:17.891133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:50.591 [2024-12-09 23:11:17.891155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:50.591 [2024-12-09 23:11:17.891167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:50.591 [2024-12-09 23:11:17.891177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.591 [2024-12-09 23:11:17.891292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:50.591 [2024-12-09 23:11:17.891307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:50.591 [2024-12-09 23:11:17.891318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:50.591 [2024-12-09 23:11:17.891329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.591 [2024-12-09 23:11:17.891370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:50.591 [2024-12-09 23:11:17.891383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:50.591 [2024-12-09 23:11:17.891398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:50.591 [2024-12-09 23:11:17.891408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:50.591 [2024-12-09 23:11:17.891480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:50.591 [2024-12-09 23:11:17.891495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:50.591 [2024-12-09 23:11:17.891506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:50.591 [2024-12-09 23:11:17.891517] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:29:50.591 [2024-12-09 23:11:17.891565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:50.591 [2024-12-09 23:11:17.891601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:50.591 [2024-12-09 23:11:17.891612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:50.591 [2024-12-09 23:11:17.891623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:50.591 [2024-12-09 23:11:17.891771] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 563.990 ms, result 0
00:29:51.981
00:29:51.981
00:29:51.981 23:11:19 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79199
00:29:51.981 23:11:19 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:29:51.981 23:11:19 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79199
00:29:51.981 23:11:19 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79199 ']'
00:29:51.981 23:11:19 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:51.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:51.981 23:11:19 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:51.981 23:11:19 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:51.981 23:11:19 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:51.981 23:11:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:29:51.981 [2024-12-09 23:11:19.137164] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization...
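The waitforlisten step above retries until spdk_tgt accepts connections on /var/tmp/spdk.sock before the test goes on to issue RPCs through scripts/rpc.py. Below is a minimal Python sketch of the same wait-then-call pattern; the socket path, rpc.py location, and unmap parameters are taken from this run, but the wrapper itself (including the 10-second timeout) is illustrative and not an SPDK utility:

#!/usr/bin/env python3
# Wait for spdk_tgt to listen on its UNIX domain socket, then issue one RPC.
import socket
import subprocess
import time

SOCK = "/var/tmp/spdk.sock"  # rpc_addr used by waitforlisten in this log
RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

def wait_for_listen(path, timeout=10.0):
    # Poll until connect() succeeds, mirroring waitforlisten's retry loop.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
                return
        except OSError:
            time.sleep(0.1)
    raise TimeoutError(f"no listener on {path}")

wait_for_listen(SOCK)
# Same unmap the test issues below: discard the first 1024 LBAs of ftl0.
subprocess.run([RPC, "-s", SOCK, "bdev_ftl_unmap", "-b", "ftl0",
                "--lba", "0", "--num_blocks", "1024"], check=True)

rpc.py's -s option selects the RPC listen address, so the same wrapper also works against a non-default socket.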
00:29:51.981 [2024-12-09 23:11:19.137353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79199 ] 00:29:52.241 [2024-12-09 23:11:19.322492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.241 [2024-12-09 23:11:19.457556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.179 23:11:20 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.179 23:11:20 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:29:53.179 23:11:20 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:29:53.438 [2024-12-09 23:11:20.641132] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:53.438 [2024-12-09 23:11:20.641229] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:53.698 [2024-12-09 23:11:20.830764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.698 [2024-12-09 23:11:20.830838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:53.698 [2024-12-09 23:11:20.830861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:53.698 [2024-12-09 23:11:20.830873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.698 [2024-12-09 23:11:20.834950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.698 [2024-12-09 23:11:20.835002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:53.698 [2024-12-09 23:11:20.835019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.057 ms 00:29:53.698 [2024-12-09 23:11:20.835030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.698 [2024-12-09 23:11:20.835177] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:53.698 [2024-12-09 23:11:20.836111] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:53.698 [2024-12-09 23:11:20.836146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.698 [2024-12-09 23:11:20.836157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:53.698 [2024-12-09 23:11:20.836173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:29:53.698 [2024-12-09 23:11:20.836184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.698 [2024-12-09 23:11:20.837742] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:53.698 [2024-12-09 23:11:20.857737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.698 [2024-12-09 23:11:20.857816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:53.698 [2024-12-09 23:11:20.857835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.029 ms 00:29:53.698 [2024-12-09 23:11:20.857862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.698 [2024-12-09 23:11:20.858035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.698 [2024-12-09 23:11:20.858057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:53.698 [2024-12-09 23:11:20.858070] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:29:53.698 [2024-12-09 23:11:20.858085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.698 [2024-12-09 23:11:20.865385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.698 [2024-12-09 23:11:20.865471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:53.698 [2024-12-09 23:11:20.865487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.245 ms 00:29:53.698 [2024-12-09 23:11:20.865504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.698 [2024-12-09 23:11:20.865679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.698 [2024-12-09 23:11:20.865701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:53.698 [2024-12-09 23:11:20.865713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:29:53.698 [2024-12-09 23:11:20.865737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.698 [2024-12-09 23:11:20.865770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.698 [2024-12-09 23:11:20.865787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:53.698 [2024-12-09 23:11:20.865799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:53.698 [2024-12-09 23:11:20.865814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.698 [2024-12-09 23:11:20.865842] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:53.698 [2024-12-09 23:11:20.870557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.698 [2024-12-09 23:11:20.870597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:53.698 [2024-12-09 23:11:20.870615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.725 ms 00:29:53.698 [2024-12-09 23:11:20.870626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.698 [2024-12-09 23:11:20.870731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.698 [2024-12-09 23:11:20.870746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:53.698 [2024-12-09 23:11:20.870763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:53.698 [2024-12-09 23:11:20.870779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.698 [2024-12-09 23:11:20.870808] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:53.698 [2024-12-09 23:11:20.870839] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:53.698 [2024-12-09 23:11:20.870912] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:53.698 [2024-12-09 23:11:20.870934] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:53.698 [2024-12-09 23:11:20.871032] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:53.699 [2024-12-09 23:11:20.871046] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:53.699 [2024-12-09 23:11:20.871072] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:53.699 [2024-12-09 23:11:20.871085] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:53.699 [2024-12-09 23:11:20.871103] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:53.699 [2024-12-09 23:11:20.871115] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:53.699 [2024-12-09 23:11:20.871131] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:53.699 [2024-12-09 23:11:20.871141] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:53.699 [2024-12-09 23:11:20.871161] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:53.699 [2024-12-09 23:11:20.871173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.699 [2024-12-09 23:11:20.871188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:53.699 [2024-12-09 23:11:20.871200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:29:53.699 [2024-12-09 23:11:20.871214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.699 [2024-12-09 23:11:20.871296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.699 [2024-12-09 23:11:20.871312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:53.699 [2024-12-09 23:11:20.871323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:53.699 [2024-12-09 23:11:20.871338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.699 [2024-12-09 23:11:20.871430] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:53.699 [2024-12-09 23:11:20.871468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:53.699 [2024-12-09 23:11:20.871479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:53.699 [2024-12-09 23:11:20.871496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.699 [2024-12-09 23:11:20.871507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:53.699 [2024-12-09 23:11:20.871523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:53.699 [2024-12-09 23:11:20.871534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:53.699 [2024-12-09 23:11:20.871554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:53.699 [2024-12-09 23:11:20.871564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:53.699 [2024-12-09 23:11:20.871578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:53.699 [2024-12-09 23:11:20.871588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:53.699 [2024-12-09 23:11:20.871603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:53.699 [2024-12-09 23:11:20.871613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:53.699 [2024-12-09 23:11:20.871628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:53.699 [2024-12-09 23:11:20.871639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:53.699 [2024-12-09 23:11:20.871654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.699 
[2024-12-09 23:11:20.871664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:53.699 [2024-12-09 23:11:20.871679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:53.699 [2024-12-09 23:11:20.871701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.699 [2024-12-09 23:11:20.871716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:53.699 [2024-12-09 23:11:20.871726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:53.699 [2024-12-09 23:11:20.871740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.699 [2024-12-09 23:11:20.871751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:53.699 [2024-12-09 23:11:20.871770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:53.699 [2024-12-09 23:11:20.871780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.699 [2024-12-09 23:11:20.871795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:53.699 [2024-12-09 23:11:20.871805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:53.699 [2024-12-09 23:11:20.871820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.699 [2024-12-09 23:11:20.871830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:53.699 [2024-12-09 23:11:20.871845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:53.699 [2024-12-09 23:11:20.871855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.699 [2024-12-09 23:11:20.871869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:53.699 [2024-12-09 23:11:20.871879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:53.699 [2024-12-09 23:11:20.871893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:53.699 [2024-12-09 23:11:20.871903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:53.699 [2024-12-09 23:11:20.871917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:53.699 [2024-12-09 23:11:20.871927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:53.699 [2024-12-09 23:11:20.871941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:53.699 [2024-12-09 23:11:20.871951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:53.699 [2024-12-09 23:11:20.871969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.699 [2024-12-09 23:11:20.871979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:53.699 [2024-12-09 23:11:20.871994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:53.699 [2024-12-09 23:11:20.872003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.699 [2024-12-09 23:11:20.872018] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:53.699 [2024-12-09 23:11:20.872034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:53.699 [2024-12-09 23:11:20.872054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:53.699 [2024-12-09 23:11:20.872065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.699 [2024-12-09 23:11:20.872081] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:29:53.699 [2024-12-09 23:11:20.872091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:53.699 [2024-12-09 23:11:20.872106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:53.699 [2024-12-09 23:11:20.872116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:53.699 [2024-12-09 23:11:20.872130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:53.699 [2024-12-09 23:11:20.872140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:53.699 [2024-12-09 23:11:20.872156] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:53.699 [2024-12-09 23:11:20.872171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:53.699 [2024-12-09 23:11:20.872194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:53.699 [2024-12-09 23:11:20.872206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:53.699 [2024-12-09 23:11:20.872222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:53.699 [2024-12-09 23:11:20.872233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:53.699 [2024-12-09 23:11:20.872249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:53.699 [2024-12-09 23:11:20.872260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:53.699 [2024-12-09 23:11:20.872275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:53.699 [2024-12-09 23:11:20.872285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:53.699 [2024-12-09 23:11:20.872301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:53.699 [2024-12-09 23:11:20.872313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:53.699 [2024-12-09 23:11:20.872329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:53.699 [2024-12-09 23:11:20.872340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:53.699 [2024-12-09 23:11:20.872355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:53.699 [2024-12-09 23:11:20.872366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:53.699 [2024-12-09 23:11:20.872381] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:53.699 [2024-12-09 
23:11:20.872393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:53.699 [2024-12-09 23:11:20.872414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:53.699 [2024-12-09 23:11:20.872425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:53.699 [2024-12-09 23:11:20.872440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:53.699 [2024-12-09 23:11:20.872461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:53.699 [2024-12-09 23:11:20.872479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.699 [2024-12-09 23:11:20.872490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:53.699 [2024-12-09 23:11:20.872507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:29:53.699 [2024-12-09 23:11:20.872541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.699 [2024-12-09 23:11:20.914780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.699 [2024-12-09 23:11:20.914851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:53.699 [2024-12-09 23:11:20.914874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.227 ms 00:29:53.699 [2024-12-09 23:11:20.914892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.699 [2024-12-09 23:11:20.915073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.699 [2024-12-09 23:11:20.915088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:53.699 [2024-12-09 23:11:20.915106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:53.700 [2024-12-09 23:11:20.915117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.700 [2024-12-09 23:11:20.969913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.700 [2024-12-09 23:11:20.969990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:53.700 [2024-12-09 23:11:20.970013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.847 ms 00:29:53.700 [2024-12-09 23:11:20.970025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.700 [2024-12-09 23:11:20.970172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.700 [2024-12-09 23:11:20.970186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:53.700 [2024-12-09 23:11:20.970203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:53.700 [2024-12-09 23:11:20.970214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.700 [2024-12-09 23:11:20.971052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.700 [2024-12-09 23:11:20.971081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:53.700 [2024-12-09 23:11:20.971098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:29:53.700 [2024-12-09 23:11:20.971110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:53.700 [2024-12-09 23:11:20.971244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.700 [2024-12-09 23:11:20.971259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:53.700 [2024-12-09 23:11:20.971275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:29:53.700 [2024-12-09 23:11:20.971286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.700 [2024-12-09 23:11:20.996727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.700 [2024-12-09 23:11:20.996794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:53.700 [2024-12-09 23:11:20.996815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.450 ms 00:29:53.700 [2024-12-09 23:11:20.996827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.700 [2024-12-09 23:11:21.030212] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:53.700 [2024-12-09 23:11:21.030287] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:53.700 [2024-12-09 23:11:21.030313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.700 [2024-12-09 23:11:21.030326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:53.700 [2024-12-09 23:11:21.030346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.362 ms 00:29:53.700 [2024-12-09 23:11:21.030371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.061858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.061941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:53.957 [2024-12-09 23:11:21.061966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.333 ms 00:29:53.957 [2024-12-09 23:11:21.061978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.082227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.082327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:53.957 [2024-12-09 23:11:21.082359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.097 ms 00:29:53.957 [2024-12-09 23:11:21.082370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.102339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.102425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:53.957 [2024-12-09 23:11:21.102474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.767 ms 00:29:53.957 [2024-12-09 23:11:21.102488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.103399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.103438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:53.957 [2024-12-09 23:11:21.103467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:29:53.957 [2024-12-09 23:11:21.103480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 
23:11:21.195237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.195312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:53.957 [2024-12-09 23:11:21.195337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.854 ms 00:29:53.957 [2024-12-09 23:11:21.195349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.211179] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:53.957 [2024-12-09 23:11:21.232551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.232636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:53.957 [2024-12-09 23:11:21.232660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.046 ms 00:29:53.957 [2024-12-09 23:11:21.232677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.232800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.232821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:53.957 [2024-12-09 23:11:21.232834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:53.957 [2024-12-09 23:11:21.232851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.232908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.232926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:53.957 [2024-12-09 23:11:21.232938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:53.957 [2024-12-09 23:11:21.232960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.232986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.233004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:53.957 [2024-12-09 23:11:21.233016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:53.957 [2024-12-09 23:11:21.233033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.233079] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:53.957 [2024-12-09 23:11:21.233103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.233121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:53.957 [2024-12-09 23:11:21.233137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:53.957 [2024-12-09 23:11:21.233148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.274006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.274081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:53.957 [2024-12-09 23:11:21.274106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.872 ms 00:29:53.957 [2024-12-09 23:11:21.274118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.274298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.957 [2024-12-09 23:11:21.274313] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:53.957 [2024-12-09 23:11:21.274331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:53.957 [2024-12-09 23:11:21.274349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.957 [2024-12-09 23:11:21.275710] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:53.957 [2024-12-09 23:11:21.281657] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 445.265 ms, result 0 00:29:53.957 [2024-12-09 23:11:21.283143] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:54.215 Some configs were skipped because the RPC state that can call them passed over. 00:29:54.216 23:11:21 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:29:54.474 [2024-12-09 23:11:21.636673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.474 [2024-12-09 23:11:21.637051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:29:54.474 [2024-12-09 23:11:21.637176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.586 ms 00:29:54.474 [2024-12-09 23:11:21.637205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.474 [2024-12-09 23:11:21.637274] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.194 ms, result 0 00:29:54.474 true 00:29:54.474 23:11:21 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:29:54.732 [2024-12-09 23:11:21.856236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.732 [2024-12-09 23:11:21.856589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:29:54.732 [2024-12-09 23:11:21.856632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:29:54.732 [2024-12-09 23:11:21.856646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.732 [2024-12-09 23:11:21.856724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.747 ms, result 0 00:29:54.732 true 00:29:54.732 23:11:21 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79199 00:29:54.732 23:11:21 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79199 ']' 00:29:54.732 23:11:21 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79199 00:29:54.733 23:11:21 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:29:54.733 23:11:21 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.733 23:11:21 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79199 00:29:54.733 23:11:21 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:54.733 23:11:21 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:54.733 23:11:21 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79199' 00:29:54.733 killing process with pid 79199 00:29:54.733 23:11:21 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79199 00:29:54.733 23:11:21 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79199 00:29:56.110 [2024-12-09 23:11:23.083093] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.083700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:56.110 [2024-12-09 23:11:23.083880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:56.110 [2024-12-09 23:11:23.083928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.083998] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:29:56.110 [2024-12-09 23:11:23.088207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.088379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:56.110 [2024-12-09 23:11:23.088520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.151 ms 00:29:56.110 [2024-12-09 23:11:23.088539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.088849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.088873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:56.110 [2024-12-09 23:11:23.088888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:29:56.110 [2024-12-09 23:11:23.088899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.092480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.092521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:56.110 [2024-12-09 23:11:23.092542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.559 ms 00:29:56.110 [2024-12-09 23:11:23.092554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.098277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.098324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:56.110 [2024-12-09 23:11:23.098345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.664 ms 00:29:56.110 [2024-12-09 23:11:23.098356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.113785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.114096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:56.110 [2024-12-09 23:11:23.114134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.368 ms 00:29:56.110 [2024-12-09 23:11:23.114146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.125302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.125601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:56.110 [2024-12-09 23:11:23.125703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.048 ms 00:29:56.110 [2024-12-09 23:11:23.125741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.125962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.126149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:56.110 [2024-12-09 23:11:23.126238] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:29:56.110 [2024-12-09 23:11:23.126269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.142142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.142422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:56.110 [2024-12-09 23:11:23.142569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.825 ms 00:29:56.110 [2024-12-09 23:11:23.142610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.159032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.159307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:56.110 [2024-12-09 23:11:23.159446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.275 ms 00:29:56.110 [2024-12-09 23:11:23.159496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.174908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.175194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:56.110 [2024-12-09 23:11:23.175290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.311 ms 00:29:56.110 [2024-12-09 23:11:23.175328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.190844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.110 [2024-12-09 23:11:23.191115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:56.110 [2024-12-09 23:11:23.191211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.342 ms 00:29:56.110 [2024-12-09 23:11:23.191250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.110 [2024-12-09 23:11:23.191359] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:56.110 [2024-12-09 23:11:23.191408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:56.110 [2024-12-09 23:11:23.191565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:56.110 [2024-12-09 23:11:23.191623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:56.110 [2024-12-09 23:11:23.191676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:56.110 [2024-12-09 23:11:23.191776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:56.110 [2024-12-09 23:11:23.191841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:56.110 [2024-12-09 23:11:23.191892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:56.110 [2024-12-09 23:11:23.191987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:56.110 [2024-12-09 23:11:23.192043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:56.110 [2024-12-09 23:11:23.192097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:56.110 [2024-12-09 
23:11:23.192147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] [Bands 11-84: 74 identical records elided, each 0 / 261120 wr_cnt: 0 state: free] 00:29:56.111 [2024-12-09 23:11:23.193921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.193937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.193948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.193961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.193972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.193985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.193995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.194009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.194020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.194035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.194047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.194060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.194072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.194085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.194096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.194109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:56.111 [2024-12-09 23:11:23.194140] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:56.111 [2024-12-09 23:11:23.194162] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e2a83606-4f0a-47b8-82fe-3fe8d4df16c8 00:29:56.111 [2024-12-09 23:11:23.194177] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:56.111 [2024-12-09 23:11:23.194190] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:56.111 [2024-12-09 23:11:23.194201] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:56.111 [2024-12-09 23:11:23.194215] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:56.111 [2024-12-09 23:11:23.194225] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:56.111 [2024-12-09 23:11:23.194238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:56.111 [2024-12-09 23:11:23.194248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:56.111 [2024-12-09 23:11:23.194260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:56.111 [2024-12-09 23:11:23.194270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:56.111 [2024-12-09 23:11:23.194285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
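(Aside on the ftl_dev_dump_stats block above: WAF is the write amplification factor, i.e. total media writes divided by user writes. This trim-only pass issued no user I/O, so user writes: 0 while total writes: 960 — presumably all FTL metadata traffic from the persist steps logged above (NV cache, valid map, P2L, band info, trim metadata, superblock) — and 960 / 0 is printed as WAF: inf.)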
00:29:56.111 [2024-12-09 23:11:23.194296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:56.111 [2024-12-09 23:11:23.194312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.938 ms 00:29:56.111 [2024-12-09 23:11:23.194322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.111 [2024-12-09 23:11:23.215570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.111 [2024-12-09 23:11:23.215645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:56.111 [2024-12-09 23:11:23.215669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.229 ms 00:29:56.111 [2024-12-09 23:11:23.215680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.111 [2024-12-09 23:11:23.216355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.111 [2024-12-09 23:11:23.216368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:56.111 [2024-12-09 23:11:23.216386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:29:56.111 [2024-12-09 23:11:23.216397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.111 [2024-12-09 23:11:23.289102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.111 [2024-12-09 23:11:23.289178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:56.111 [2024-12-09 23:11:23.289200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.111 [2024-12-09 23:11:23.289212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.111 [2024-12-09 23:11:23.289379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.111 [2024-12-09 23:11:23.289393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:56.111 [2024-12-09 23:11:23.289412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.112 [2024-12-09 23:11:23.289423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.112 [2024-12-09 23:11:23.289507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.112 [2024-12-09 23:11:23.289522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:56.112 [2024-12-09 23:11:23.289550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.112 [2024-12-09 23:11:23.289563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.112 [2024-12-09 23:11:23.289589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.112 [2024-12-09 23:11:23.289600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:56.112 [2024-12-09 23:11:23.289616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.112 [2024-12-09 23:11:23.289632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.112 [2024-12-09 23:11:23.421185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.112 [2024-12-09 23:11:23.421266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:56.112 [2024-12-09 23:11:23.421297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.112 [2024-12-09 23:11:23.421308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.370 [2024-12-09 
23:11:23.524952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.370 [2024-12-09 23:11:23.525262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:56.370 [2024-12-09 23:11:23.525302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.370 [2024-12-09 23:11:23.525321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.370 [2024-12-09 23:11:23.525487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.370 [2024-12-09 23:11:23.525503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:56.370 [2024-12-09 23:11:23.525526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.370 [2024-12-09 23:11:23.525537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.370 [2024-12-09 23:11:23.525575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.370 [2024-12-09 23:11:23.525587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:56.370 [2024-12-09 23:11:23.525603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.370 [2024-12-09 23:11:23.525615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.370 [2024-12-09 23:11:23.525765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.370 [2024-12-09 23:11:23.525779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:56.370 [2024-12-09 23:11:23.525795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.370 [2024-12-09 23:11:23.525806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.370 [2024-12-09 23:11:23.525856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.370 [2024-12-09 23:11:23.525868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:56.370 [2024-12-09 23:11:23.525884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.370 [2024-12-09 23:11:23.525895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.370 [2024-12-09 23:11:23.525946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.370 [2024-12-09 23:11:23.525957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:56.370 [2024-12-09 23:11:23.525978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.370 [2024-12-09 23:11:23.525989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.370 [2024-12-09 23:11:23.526040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.370 [2024-12-09 23:11:23.526053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:56.370 [2024-12-09 23:11:23.526068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.370 [2024-12-09 23:11:23.526079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.370 [2024-12-09 23:11:23.526235] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 443.830 ms, result 0 00:29:57.304 23:11:24 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:57.562 [2024-12-09 23:11:24.711838] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:29:57.562 [2024-12-09 23:11:24.711990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79269 ] 00:29:57.562 [2024-12-09 23:11:24.892028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.820 [2024-12-09 23:11:25.028023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.083 [2024-12-09 23:11:25.415167] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:58.083 [2024-12-09 23:11:25.415519] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:58.344 [2024-12-09 23:11:25.578475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.578790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:58.344 [2024-12-09 23:11:25.578821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:58.344 [2024-12-09 23:11:25.578833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.582242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.582417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:58.344 [2024-12-09 23:11:25.582443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.374 ms 00:29:58.344 [2024-12-09 23:11:25.582480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.582620] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:58.344 [2024-12-09 23:11:25.583642] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:58.344 [2024-12-09 23:11:25.583680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.583692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:58.344 [2024-12-09 23:11:25.583704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:29:58.344 [2024-12-09 23:11:25.583715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.585937] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:58.344 [2024-12-09 23:11:25.606672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.606744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:58.344 [2024-12-09 23:11:25.606762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.766 ms 00:29:58.344 [2024-12-09 23:11:25.606773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.606942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.606958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:58.344 [2024-12-09 23:11:25.606970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:58.344 [2024-12-09 
23:11:25.606980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.618198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.618252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:58.344 [2024-12-09 23:11:25.618267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.187 ms 00:29:58.344 [2024-12-09 23:11:25.618295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.618446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.618501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:58.344 [2024-12-09 23:11:25.618513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:58.344 [2024-12-09 23:11:25.618524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.618562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.618574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:58.344 [2024-12-09 23:11:25.618585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:58.344 [2024-12-09 23:11:25.618595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.618622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:29:58.344 [2024-12-09 23:11:25.624376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.624599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:58.344 [2024-12-09 23:11:25.624625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.769 ms 00:29:58.344 [2024-12-09 23:11:25.624637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.624724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.624738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:58.344 [2024-12-09 23:11:25.624749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:58.344 [2024-12-09 23:11:25.624760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.624791] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:58.344 [2024-12-09 23:11:25.624816] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:58.344 [2024-12-09 23:11:25.624852] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:58.344 [2024-12-09 23:11:25.624871] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:58.344 [2024-12-09 23:11:25.624963] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:58.344 [2024-12-09 23:11:25.624977] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:58.344 [2024-12-09 23:11:25.624991] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
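(Aside: the trace_step records in this log always arrive as a fixed Action / name / duration / status quadruplet, which makes per-step timing easy to total offline. A minimal sketch, assuming the console output has been saved one record per line; the filename build.log is hypothetical:

  awk '
    /428:trace_step/ { name = $0; sub(/.*name: /, "", name) }              # remember the current step name
    /430:trace_step/ { d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d)
                       total[name] += d }                                  # accumulate milliseconds per step
    END { for (s in total) printf "%10.3f ms  %s\n", total[s], s }         # one line per distinct step
  ' build.log | sort -rn

Sorting numerically puts the slow steps first; against the startup sequence above, 'Restore P2L checkpoints' and 'Initialize NV cache' would be expected near the top.)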
00:29:58.344 [2024-12-09 23:11:25.625007] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:58.344 [2024-12-09 23:11:25.625020] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:58.344 [2024-12-09 23:11:25.625032] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:29:58.344 [2024-12-09 23:11:25.625042] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:58.344 [2024-12-09 23:11:25.625052] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:58.344 [2024-12-09 23:11:25.625062] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:58.344 [2024-12-09 23:11:25.625072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.625082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:58.344 [2024-12-09 23:11:25.625094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:29:58.344 [2024-12-09 23:11:25.625104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.625180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.344 [2024-12-09 23:11:25.625195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:58.344 [2024-12-09 23:11:25.625206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:58.344 [2024-12-09 23:11:25.625215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.344 [2024-12-09 23:11:25.625303] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:58.344 [2024-12-09 23:11:25.625316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:58.344 [2024-12-09 23:11:25.625327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:58.344 [2024-12-09 23:11:25.625337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.344 [2024-12-09 23:11:25.625348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:58.344 [2024-12-09 23:11:25.625358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:58.344 [2024-12-09 23:11:25.625367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:29:58.344 [2024-12-09 23:11:25.625376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:58.344 [2024-12-09 23:11:25.625387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:29:58.344 [2024-12-09 23:11:25.625396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:58.344 [2024-12-09 23:11:25.625409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:58.344 [2024-12-09 23:11:25.625431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:29:58.344 [2024-12-09 23:11:25.625441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:58.344 [2024-12-09 23:11:25.625463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:58.345 [2024-12-09 23:11:25.625473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:29:58.345 [2024-12-09 23:11:25.625483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.345 [2024-12-09 23:11:25.625492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:29:58.345 [2024-12-09 23:11:25.625502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:29:58.345 [2024-12-09 23:11:25.625512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.345 [2024-12-09 23:11:25.625521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:58.345 [2024-12-09 23:11:25.625530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:29:58.345 [2024-12-09 23:11:25.625539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.345 [2024-12-09 23:11:25.625549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:58.345 [2024-12-09 23:11:25.625558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:29:58.345 [2024-12-09 23:11:25.625567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.345 [2024-12-09 23:11:25.625576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:58.345 [2024-12-09 23:11:25.625585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:29:58.345 [2024-12-09 23:11:25.625594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.345 [2024-12-09 23:11:25.625603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:58.345 [2024-12-09 23:11:25.625612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:29:58.345 [2024-12-09 23:11:25.625621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:58.345 [2024-12-09 23:11:25.625630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:58.345 [2024-12-09 23:11:25.625640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:29:58.345 [2024-12-09 23:11:25.625649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:58.345 [2024-12-09 23:11:25.625658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:58.345 [2024-12-09 23:11:25.625667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:29:58.345 [2024-12-09 23:11:25.625676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:58.345 [2024-12-09 23:11:25.625684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:58.345 [2024-12-09 23:11:25.625694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:29:58.345 [2024-12-09 23:11:25.625703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.345 [2024-12-09 23:11:25.625712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:58.345 [2024-12-09 23:11:25.625721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:29:58.345 [2024-12-09 23:11:25.625731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.345 [2024-12-09 23:11:25.625741] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:58.345 [2024-12-09 23:11:25.625752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:58.345 [2024-12-09 23:11:25.625766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:58.345 [2024-12-09 23:11:25.625776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:58.345 [2024-12-09 23:11:25.625786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:58.345 [2024-12-09 23:11:25.625796] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:58.345 [2024-12-09 23:11:25.625806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:58.345 [2024-12-09 23:11:25.625816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:58.345 [2024-12-09 23:11:25.625825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:58.345 [2024-12-09 23:11:25.625835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:58.345 [2024-12-09 23:11:25.625845] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:58.345 [2024-12-09 23:11:25.625858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:58.345 [2024-12-09 23:11:25.625871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:29:58.345 [2024-12-09 23:11:25.625881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:29:58.345 [2024-12-09 23:11:25.625891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:29:58.345 [2024-12-09 23:11:25.625901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:29:58.345 [2024-12-09 23:11:25.625911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:29:58.345 [2024-12-09 23:11:25.625921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:29:58.345 [2024-12-09 23:11:25.625932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:29:58.345 [2024-12-09 23:11:25.625942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:29:58.345 [2024-12-09 23:11:25.625952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:29:58.345 [2024-12-09 23:11:25.625962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:29:58.345 [2024-12-09 23:11:25.625972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:29:58.345 [2024-12-09 23:11:25.625983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:29:58.345 [2024-12-09 23:11:25.625993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:29:58.345 [2024-12-09 23:11:25.626004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:29:58.345 [2024-12-09 23:11:25.626014] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:58.345 [2024-12-09 23:11:25.626027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:58.345 [2024-12-09 23:11:25.626039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:58.345 [2024-12-09 23:11:25.626049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:58.345 [2024-12-09 23:11:25.626060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:58.345 [2024-12-09 23:11:25.626071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:58.345 [2024-12-09 23:11:25.626082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.345 [2024-12-09 23:11:25.626098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:58.345 [2024-12-09 23:11:25.626108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:29:58.345 [2024-12-09 23:11:25.626119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.345 [2024-12-09 23:11:25.666878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.345 [2024-12-09 23:11:25.667184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:58.345 [2024-12-09 23:11:25.667212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.761 ms 00:29:58.345 [2024-12-09 23:11:25.667224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.345 [2024-12-09 23:11:25.667409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.345 [2024-12-09 23:11:25.667423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:58.345 [2024-12-09 23:11:25.667435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:58.345 [2024-12-09 23:11:25.667445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.604 [2024-12-09 23:11:25.726290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.604 [2024-12-09 23:11:25.726362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:58.604 [2024-12-09 23:11:25.726384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.892 ms 00:29:58.604 [2024-12-09 23:11:25.726396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.604 [2024-12-09 23:11:25.726571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.604 [2024-12-09 23:11:25.726604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:58.604 [2024-12-09 23:11:25.726616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:58.604 [2024-12-09 23:11:25.726627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.604 [2024-12-09 23:11:25.727083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.604 [2024-12-09 23:11:25.727109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:58.604 [2024-12-09 23:11:25.727129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:29:58.604 [2024-12-09 23:11:25.727140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.604 [2024-12-09 23:11:25.727279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:58.604 [2024-12-09 23:11:25.727293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:58.604 [2024-12-09 23:11:25.727304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:29:58.604 [2024-12-09 23:11:25.727316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.604 [2024-12-09 23:11:25.750310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.604 [2024-12-09 23:11:25.750649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:58.604 [2024-12-09 23:11:25.750677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.005 ms 00:29:58.604 [2024-12-09 23:11:25.750689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.604 [2024-12-09 23:11:25.772670] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:58.604 [2024-12-09 23:11:25.772749] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:58.604 [2024-12-09 23:11:25.772768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.604 [2024-12-09 23:11:25.772780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:58.604 [2024-12-09 23:11:25.772795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.938 ms 00:29:58.604 [2024-12-09 23:11:25.772806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.604 [2024-12-09 23:11:25.805434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.604 [2024-12-09 23:11:25.805530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:58.604 [2024-12-09 23:11:25.805549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.524 ms 00:29:58.604 [2024-12-09 23:11:25.805560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.604 [2024-12-09 23:11:25.826921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.604 [2024-12-09 23:11:25.826999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:58.604 [2024-12-09 23:11:25.827017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.226 ms 00:29:58.604 [2024-12-09 23:11:25.827027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.604 [2024-12-09 23:11:25.848055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.604 [2024-12-09 23:11:25.848131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:58.604 [2024-12-09 23:11:25.848148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.887 ms 00:29:58.604 [2024-12-09 23:11:25.848159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.604 [2024-12-09 23:11:25.849069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.604 [2024-12-09 23:11:25.849101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:58.604 [2024-12-09 23:11:25.849114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:29:58.604 [2024-12-09 23:11:25.849125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.883 [2024-12-09 23:11:25.944394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.883 [2024-12-09 
23:11:25.944505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:58.883 [2024-12-09 23:11:25.944524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.386 ms 00:29:58.883 [2024-12-09 23:11:25.944535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.883 [2024-12-09 23:11:25.959958] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:58.883 [2024-12-09 23:11:25.982280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.883 [2024-12-09 23:11:25.982356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:58.883 [2024-12-09 23:11:25.982374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.604 ms 00:29:58.883 [2024-12-09 23:11:25.982393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.883 [2024-12-09 23:11:25.982572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.883 [2024-12-09 23:11:25.982589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:58.883 [2024-12-09 23:11:25.982601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:58.883 [2024-12-09 23:11:25.982611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.883 [2024-12-09 23:11:25.982677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.883 [2024-12-09 23:11:25.982689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:58.883 [2024-12-09 23:11:25.982701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:29:58.883 [2024-12-09 23:11:25.982717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.883 [2024-12-09 23:11:25.982757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.883 [2024-12-09 23:11:25.982772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:58.883 [2024-12-09 23:11:25.982782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:58.883 [2024-12-09 23:11:25.982793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.883 [2024-12-09 23:11:25.982830] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:58.883 [2024-12-09 23:11:25.982844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.883 [2024-12-09 23:11:25.982854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:58.883 [2024-12-09 23:11:25.982865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:58.883 [2024-12-09 23:11:25.982875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.883 [2024-12-09 23:11:26.023489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.883 [2024-12-09 23:11:26.023568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:58.883 [2024-12-09 23:11:26.023587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.649 ms 00:29:58.883 [2024-12-09 23:11:26.023599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.883 [2024-12-09 23:11:26.023794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:58.883 [2024-12-09 23:11:26.023809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:58.883 [2024-12-09 
23:11:26.023822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:58.883 [2024-12-09 23:11:26.023832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:58.883 [2024-12-09 23:11:26.024856] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:58.883 [2024-12-09 23:11:26.030414] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 446.767 ms, result 0 00:29:58.883 [2024-12-09 23:11:26.031536] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:58.883 [2024-12-09 23:11:26.051368] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:59.819  [2024-12-09T23:11:28.531Z] Copying: 29/256 [MB] (29 MBps) [2024-12-09T23:11:29.466Z] Copying: 54/256 [MB] (25 MBps) [2024-12-09T23:11:30.411Z] Copying: 81/256 [MB] (26 MBps) [2024-12-09T23:11:31.369Z] Copying: 107/256 [MB] (26 MBps) [2024-12-09T23:11:32.304Z] Copying: 133/256 [MB] (26 MBps) [2024-12-09T23:11:33.243Z] Copying: 159/256 [MB] (25 MBps) [2024-12-09T23:11:34.179Z] Copying: 185/256 [MB] (26 MBps) [2024-12-09T23:11:35.119Z] Copying: 209/256 [MB] (24 MBps) [2024-12-09T23:11:36.056Z] Copying: 234/256 [MB] (24 MBps) [2024-12-09T23:11:36.623Z] Copying: 256/256 [MB] (average 25 MBps)[2024-12-09 23:11:36.337361] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:09.287 [2024-12-09 23:11:36.353922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.354003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:09.287 [2024-12-09 23:11:36.354034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:09.287 [2024-12-09 23:11:36.354046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 23:11:36.354077] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:30:09.287 [2024-12-09 23:11:36.359105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.359327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:09.287 [2024-12-09 23:11:36.359498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.015 ms 00:30:09.287 [2024-12-09 23:11:36.359511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 23:11:36.359798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.359811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:09.287 [2024-12-09 23:11:36.359823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:30:09.287 [2024-12-09 23:11:36.359833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 23:11:36.362724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.362752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:09.287 [2024-12-09 23:11:36.362765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:30:09.287 [2024-12-09 23:11:36.362776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 
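The startup sequence above closed out at 446.767 ms, and the 256 MiB copy that followed averaged 25 MBps; the shutdown now being traced repeats the same Action/name/duration/status pattern, one block per management step. When a run like this regresses, the slowest steps can be ranked straight from the captured console log. A rough triage pipeline, assuming GNU grep built with PCRE support and a hypothetical capture file name:

    log=ftl_trim_console.log   # hypothetical capture of the output above
    paste \
      <(grep -oP 'name: \K[A-Za-z0-9 -]+?(?= [0-9]{2}:)' "$log") \
      <(grep -oP 'duration: \K[0-9.]+(?= ms)' "$log") |
      sort -t $'\t' -k2,2 -nr | head

Every trace_step block prints exactly one "name:" and one "duration:" entry, so the two grep streams pair up one to one in paste.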
23:11:36.369097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.369174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:09.287 [2024-12-09 23:11:36.369188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.306 ms 00:30:09.287 [2024-12-09 23:11:36.369199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 23:11:36.413666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.414186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:09.287 [2024-12-09 23:11:36.414218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.430 ms 00:30:09.287 [2024-12-09 23:11:36.414231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 23:11:36.438177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.438253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:09.287 [2024-12-09 23:11:36.438284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.867 ms 00:30:09.287 [2024-12-09 23:11:36.438296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 23:11:36.438559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.438577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:09.287 [2024-12-09 23:11:36.438604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:30:09.287 [2024-12-09 23:11:36.438615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 23:11:36.481207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.481567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:09.287 [2024-12-09 23:11:36.481597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.637 ms 00:30:09.287 [2024-12-09 23:11:36.481609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 23:11:36.523794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.523879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:09.287 [2024-12-09 23:11:36.523898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.136 ms 00:30:09.287 [2024-12-09 23:11:36.523908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 23:11:36.565743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.565827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:09.287 [2024-12-09 23:11:36.565845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.771 ms 00:30:09.287 [2024-12-09 23:11:36.565857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 23:11:36.608172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.287 [2024-12-09 23:11:36.608518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:09.287 [2024-12-09 23:11:36.608546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.218 ms 00:30:09.287 [2024-12-09 23:11:36.608558] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.287 [2024-12-09 23:11:36.608666] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:09.287 [2024-12-09 23:11:36.608690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 
0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.608994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.609005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.609017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.609028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.609039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.609050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.609060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.609071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.609082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.609093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.609104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:09.287 [2024-12-09 23:11:36.609115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609515] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 
23:11:36.609811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:09.288 [2024-12-09 23:11:36.609842] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:09.288 [2024-12-09 23:11:36.609853] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e2a83606-4f0a-47b8-82fe-3fe8d4df16c8 00:30:09.288 [2024-12-09 23:11:36.609864] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:09.288 [2024-12-09 23:11:36.609874] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:09.288 [2024-12-09 23:11:36.609885] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:09.288 [2024-12-09 23:11:36.609896] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:09.288 [2024-12-09 23:11:36.609907] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:09.288 [2024-12-09 23:11:36.609918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:09.288 [2024-12-09 23:11:36.609934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:09.288 [2024-12-09 23:11:36.609944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:09.288 [2024-12-09 23:11:36.609953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:09.288 [2024-12-09 23:11:36.609965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.288 [2024-12-09 23:11:36.609975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:09.288 [2024-12-09 23:11:36.609988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:30:09.288 [2024-12-09 23:11:36.609998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.546 [2024-12-09 23:11:36.632229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.546 [2024-12-09 23:11:36.632533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:09.546 [2024-12-09 23:11:36.632564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.236 ms 00:30:09.546 [2024-12-09 23:11:36.632577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.546 [2024-12-09 23:11:36.633210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:09.546 [2024-12-09 23:11:36.633228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:09.546 [2024-12-09 23:11:36.633241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:30:09.546 [2024-12-09 23:11:36.633251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.546 [2024-12-09 23:11:36.692248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.546 [2024-12-09 23:11:36.692491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:09.546 [2024-12-09 23:11:36.692518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.546 [2024-12-09 23:11:36.692538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.546 [2024-12-09 23:11:36.692688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.546 [2024-12-09 23:11:36.692701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:30:09.546 [2024-12-09 23:11:36.692712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.546 [2024-12-09 23:11:36.692723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.546 [2024-12-09 23:11:36.692793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.546 [2024-12-09 23:11:36.692807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:09.546 [2024-12-09 23:11:36.692818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.546 [2024-12-09 23:11:36.692828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.546 [2024-12-09 23:11:36.692852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.546 [2024-12-09 23:11:36.692863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:09.546 [2024-12-09 23:11:36.692874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.546 [2024-12-09 23:11:36.692884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.546 [2024-12-09 23:11:36.820276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.546 [2024-12-09 23:11:36.820606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:09.546 [2024-12-09 23:11:36.820635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.547 [2024-12-09 23:11:36.820647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.805 [2024-12-09 23:11:36.926382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.805 [2024-12-09 23:11:36.926507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:09.805 [2024-12-09 23:11:36.926524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.805 [2024-12-09 23:11:36.926536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.805 [2024-12-09 23:11:36.926648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.805 [2024-12-09 23:11:36.926662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:09.805 [2024-12-09 23:11:36.926674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.805 [2024-12-09 23:11:36.926685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.805 [2024-12-09 23:11:36.926714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.805 [2024-12-09 23:11:36.926737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:09.805 [2024-12-09 23:11:36.926748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.805 [2024-12-09 23:11:36.926759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.805 [2024-12-09 23:11:36.926891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.805 [2024-12-09 23:11:36.926913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:09.805 [2024-12-09 23:11:36.926924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.805 [2024-12-09 23:11:36.926934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.805 [2024-12-09 23:11:36.926976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.805 [2024-12-09 23:11:36.926988] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:09.805 [2024-12-09 23:11:36.927003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.805 [2024-12-09 23:11:36.927013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.805 [2024-12-09 23:11:36.927057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.805 [2024-12-09 23:11:36.927070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:09.805 [2024-12-09 23:11:36.927080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.805 [2024-12-09 23:11:36.927090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.805 [2024-12-09 23:11:36.927144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:09.805 [2024-12-09 23:11:36.927161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:09.805 [2024-12-09 23:11:36.927171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:09.805 [2024-12-09 23:11:36.927182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:09.805 [2024-12-09 23:11:36.927338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 574.361 ms, result 0 00:30:10.738 00:30:10.738 00:30:10.738 23:11:38 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:11.306 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:30:11.307 23:11:38 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:30:11.307 23:11:38 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:30:11.307 23:11:38 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:11.307 23:11:38 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:11.307 23:11:38 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:30:11.307 23:11:38 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:30:11.307 23:11:38 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79199 00:30:11.307 23:11:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79199 ']' 00:30:11.307 23:11:38 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79199 00:30:11.307 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79199) - No such process 00:30:11.307 Process with pid 79199 is not found 00:30:11.307 23:11:38 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79199 is not found' 00:30:11.307 00:30:11.307 real 1m9.990s 00:30:11.307 user 1m31.910s 00:30:11.307 sys 0m7.655s 00:30:11.307 23:11:38 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:11.307 ************************************ 00:30:11.307 END TEST ftl_trim 00:30:11.307 ************************************ 00:30:11.307 23:11:38 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:30:11.565 23:11:38 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:30:11.565 23:11:38 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:11.565 23:11:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:11.565 23:11:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:11.565 ************************************ 
00:30:11.565 START TEST ftl_restore 00:30:11.565 ************************************ 00:30:11.565 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:30:11.565 * Looking for test storage... 00:30:11.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:11.565 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:11.565 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:30:11.565 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.825 23:11:38 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.825 --rc genhtml_branch_coverage=1 00:30:11.825 --rc genhtml_function_coverage=1 00:30:11.825 --rc genhtml_legend=1 00:30:11.825 --rc geninfo_all_blocks=1 00:30:11.825 --rc geninfo_unexecuted_blocks=1 00:30:11.825 00:30:11.825 ' 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.825 --rc genhtml_branch_coverage=1 00:30:11.825 --rc genhtml_function_coverage=1 00:30:11.825 --rc genhtml_legend=1 00:30:11.825 --rc geninfo_all_blocks=1 00:30:11.825 --rc geninfo_unexecuted_blocks=1 00:30:11.825 00:30:11.825 ' 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.825 --rc genhtml_branch_coverage=1 00:30:11.825 --rc genhtml_function_coverage=1 00:30:11.825 --rc genhtml_legend=1 00:30:11.825 --rc geninfo_all_blocks=1 00:30:11.825 --rc geninfo_unexecuted_blocks=1 00:30:11.825 00:30:11.825 ' 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:11.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.825 --rc genhtml_branch_coverage=1 00:30:11.825 --rc genhtml_function_coverage=1 00:30:11.825 --rc genhtml_legend=1 00:30:11.825 --rc geninfo_all_blocks=1 00:30:11.825 --rc geninfo_unexecuted_blocks=1 00:30:11.825 00:30:11.825 ' 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
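The xtrace a few entries back walks lcov's reported version (1.15) through the cmp_versions helper in scripts/common.sh: both version strings are split on ".", "-" and ":" into arrays and compared slot by slot as decimals. A condensed sketch of that technique (function name and trimmings illustrative; the real helper carries extra guards):

    # Return 0 (true) when version $1 sorts strictly before version $2.
    lt() {
      local IFS=.-:          # split fields exactly as the trace shows
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
      done
      return 1               # equal versions are not less-than
    }

Here `lt 1.15 2` succeeds on the first slot (1 < 2), which is why the log settles on the older option spelling seen in the LCOV_OPTS export above.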
00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Q6cKyuYQyF 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:11.825 
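At this point restore.sh has turned its command line, `-c 0000:00:10.0 0000:00:11.0`, into an NV-cache controller plus a base device, made a scratch mount point with mktemp -d, and armed a restore_kill trap for cleanup. A sketch of how the traced fragments likely fit together (the meanings of -u and -f are assumptions; neither is exercised in this log):

    while getopts :u:c:f opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # -c 0000:00:10.0, write-buffer controller
        u) uuid=$OPTARG ;;       # assumed: reuse an existing FTL UUID
        f) fast=1 ;;             # assumed: some fast-path toggle
      esac
    done
    shift 2        # the trace shows a literal `shift 2` past "-c <bdf>"
    device=$1      # 0000:00:11.0, PCI address of the base bdev
    timeout=240    # seconds granted to each long ftl RPC below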
23:11:38 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79479 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79479 00:30:11.825 23:11:38 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79479 ']' 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:11.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:11.825 23:11:38 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:30:11.825 [2024-12-09 23:11:39.110258] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:30:11.825 [2024-12-09 23:11:39.110408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79479 ] 00:30:12.084 [2024-12-09 23:11:39.305942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.342 [2024-12-09 23:11:39.452055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.279 23:11:40 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:13.279 23:11:40 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:30:13.280 23:11:40 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:13.280 23:11:40 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:30:13.280 23:11:40 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:13.280 23:11:40 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:30:13.280 23:11:40 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:30:13.280 23:11:40 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:13.538 23:11:40 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:13.538 23:11:40 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:30:13.538 23:11:40 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:13.538 23:11:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:13.538 23:11:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:13.538 23:11:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:30:13.538 23:11:40 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:30:13.538 23:11:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:13.798 23:11:41 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:13.798 { 00:30:13.798 "name": "nvme0n1", 00:30:13.798 "aliases": [ 00:30:13.798 "2bf31ea3-23c5-4b08-9b34-ad11340559d7" 00:30:13.798 ], 00:30:13.798 "product_name": "NVMe disk", 00:30:13.798 "block_size": 4096, 00:30:13.798 "num_blocks": 1310720, 00:30:13.798 "uuid": 
"2bf31ea3-23c5-4b08-9b34-ad11340559d7", 00:30:13.798 "numa_id": -1, 00:30:13.798 "assigned_rate_limits": { 00:30:13.798 "rw_ios_per_sec": 0, 00:30:13.798 "rw_mbytes_per_sec": 0, 00:30:13.798 "r_mbytes_per_sec": 0, 00:30:13.798 "w_mbytes_per_sec": 0 00:30:13.798 }, 00:30:13.798 "claimed": true, 00:30:13.798 "claim_type": "read_many_write_one", 00:30:13.798 "zoned": false, 00:30:13.798 "supported_io_types": { 00:30:13.798 "read": true, 00:30:13.798 "write": true, 00:30:13.798 "unmap": true, 00:30:13.798 "flush": true, 00:30:13.798 "reset": true, 00:30:13.798 "nvme_admin": true, 00:30:13.798 "nvme_io": true, 00:30:13.798 "nvme_io_md": false, 00:30:13.798 "write_zeroes": true, 00:30:13.798 "zcopy": false, 00:30:13.798 "get_zone_info": false, 00:30:13.798 "zone_management": false, 00:30:13.798 "zone_append": false, 00:30:13.798 "compare": true, 00:30:13.798 "compare_and_write": false, 00:30:13.798 "abort": true, 00:30:13.798 "seek_hole": false, 00:30:13.798 "seek_data": false, 00:30:13.798 "copy": true, 00:30:13.798 "nvme_iov_md": false 00:30:13.798 }, 00:30:13.798 "driver_specific": { 00:30:13.798 "nvme": [ 00:30:13.798 { 00:30:13.798 "pci_address": "0000:00:11.0", 00:30:13.798 "trid": { 00:30:13.798 "trtype": "PCIe", 00:30:13.798 "traddr": "0000:00:11.0" 00:30:13.798 }, 00:30:13.798 "ctrlr_data": { 00:30:13.798 "cntlid": 0, 00:30:13.798 "vendor_id": "0x1b36", 00:30:13.798 "model_number": "QEMU NVMe Ctrl", 00:30:13.798 "serial_number": "12341", 00:30:13.798 "firmware_revision": "8.0.0", 00:30:13.798 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:13.798 "oacs": { 00:30:13.798 "security": 0, 00:30:13.798 "format": 1, 00:30:13.798 "firmware": 0, 00:30:13.798 "ns_manage": 1 00:30:13.798 }, 00:30:13.798 "multi_ctrlr": false, 00:30:13.798 "ana_reporting": false 00:30:13.798 }, 00:30:13.798 "vs": { 00:30:13.798 "nvme_version": "1.4" 00:30:13.798 }, 00:30:13.798 "ns_data": { 00:30:13.798 "id": 1, 00:30:13.798 "can_share": false 00:30:13.798 } 00:30:13.798 } 00:30:13.798 ], 00:30:13.798 "mp_policy": "active_passive" 00:30:13.798 } 00:30:13.798 } 00:30:13.798 ]' 00:30:13.798 23:11:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:13.798 23:11:41 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:30:13.798 23:11:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:14.056 23:11:41 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:14.056 23:11:41 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:14.056 23:11:41 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:30:14.056 23:11:41 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:30:14.056 23:11:41 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:14.056 23:11:41 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:30:14.056 23:11:41 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:14.056 23:11:41 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:14.313 23:11:41 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=b1efb439-0089-4b2f-a963-a315768df67c 00:30:14.313 23:11:41 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:30:14.313 23:11:41 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b1efb439-0089-4b2f-a963-a315768df67c 00:30:14.571 23:11:41 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:30:14.829 23:11:41 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=83556be6-7d3c-496e-b993-3b4965b65d60 00:30:14.829 23:11:41 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 83556be6-7d3c-496e-b993-3b4965b65d60 00:30:15.152 23:11:42 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=0880c603-2d14-4559-9bb9-3fc146714e74 00:30:15.152 23:11:42 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:15.152 23:11:42 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0880c603-2d14-4559-9bb9-3fc146714e74 00:30:15.152 23:11:42 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:30:15.152 23:11:42 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:15.152 23:11:42 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=0880c603-2d14-4559-9bb9-3fc146714e74 00:30:15.152 23:11:42 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:30:15.152 23:11:42 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 0880c603-2d14-4559-9bb9-3fc146714e74 00:30:15.152 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0880c603-2d14-4559-9bb9-3fc146714e74 00:30:15.152 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:15.152 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:30:15.152 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:30:15.152 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0880c603-2d14-4559-9bb9-3fc146714e74 00:30:15.152 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:15.152 { 00:30:15.152 "name": "0880c603-2d14-4559-9bb9-3fc146714e74", 00:30:15.152 "aliases": [ 00:30:15.152 "lvs/nvme0n1p0" 00:30:15.152 ], 00:30:15.152 "product_name": "Logical Volume", 00:30:15.152 "block_size": 4096, 00:30:15.152 "num_blocks": 26476544, 00:30:15.152 "uuid": "0880c603-2d14-4559-9bb9-3fc146714e74", 00:30:15.152 "assigned_rate_limits": { 00:30:15.152 "rw_ios_per_sec": 0, 00:30:15.152 "rw_mbytes_per_sec": 0, 00:30:15.152 "r_mbytes_per_sec": 0, 00:30:15.152 "w_mbytes_per_sec": 0 00:30:15.152 }, 00:30:15.152 "claimed": false, 00:30:15.152 "zoned": false, 00:30:15.152 "supported_io_types": { 00:30:15.152 "read": true, 00:30:15.152 "write": true, 00:30:15.152 "unmap": true, 00:30:15.152 "flush": false, 00:30:15.152 "reset": true, 00:30:15.152 "nvme_admin": false, 00:30:15.152 "nvme_io": false, 00:30:15.152 "nvme_io_md": false, 00:30:15.152 "write_zeroes": true, 00:30:15.152 "zcopy": false, 00:30:15.152 "get_zone_info": false, 00:30:15.152 "zone_management": false, 00:30:15.152 "zone_append": false, 00:30:15.152 "compare": false, 00:30:15.152 "compare_and_write": false, 00:30:15.152 "abort": false, 00:30:15.152 "seek_hole": true, 00:30:15.152 "seek_data": true, 00:30:15.152 "copy": false, 00:30:15.152 "nvme_iov_md": false 00:30:15.152 }, 00:30:15.152 "driver_specific": { 00:30:15.152 "lvol": { 00:30:15.152 "lvol_store_uuid": "83556be6-7d3c-496e-b993-3b4965b65d60", 00:30:15.152 "base_bdev": "nvme0n1", 00:30:15.152 "thin_provision": true, 00:30:15.152 "num_allocated_clusters": 0, 00:30:15.152 "snapshot": false, 00:30:15.152 "clone": false, 00:30:15.152 "esnap_clone": false 00:30:15.152 } 00:30:15.152 } 00:30:15.152 } 00:30:15.152 ]' 00:30:15.152 23:11:42 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:15.152 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:30:15.152 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:15.410 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:15.410 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:15.410 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:30:15.410 23:11:42 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:30:15.410 23:11:42 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:30:15.410 23:11:42 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:15.669 23:11:42 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:15.669 23:11:42 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:15.669 23:11:42 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 0880c603-2d14-4559-9bb9-3fc146714e74 00:30:15.669 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0880c603-2d14-4559-9bb9-3fc146714e74 00:30:15.669 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:15.669 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:30:15.669 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:30:15.669 23:11:42 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0880c603-2d14-4559-9bb9-3fc146714e74 00:30:15.927 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:15.927 { 00:30:15.927 "name": "0880c603-2d14-4559-9bb9-3fc146714e74", 00:30:15.927 "aliases": [ 00:30:15.927 "lvs/nvme0n1p0" 00:30:15.927 ], 00:30:15.927 "product_name": "Logical Volume", 00:30:15.927 "block_size": 4096, 00:30:15.927 "num_blocks": 26476544, 00:30:15.927 "uuid": "0880c603-2d14-4559-9bb9-3fc146714e74", 00:30:15.927 "assigned_rate_limits": { 00:30:15.927 "rw_ios_per_sec": 0, 00:30:15.927 "rw_mbytes_per_sec": 0, 00:30:15.927 "r_mbytes_per_sec": 0, 00:30:15.927 "w_mbytes_per_sec": 0 00:30:15.927 }, 00:30:15.927 "claimed": false, 00:30:15.927 "zoned": false, 00:30:15.927 "supported_io_types": { 00:30:15.927 "read": true, 00:30:15.927 "write": true, 00:30:15.927 "unmap": true, 00:30:15.927 "flush": false, 00:30:15.927 "reset": true, 00:30:15.927 "nvme_admin": false, 00:30:15.927 "nvme_io": false, 00:30:15.927 "nvme_io_md": false, 00:30:15.927 "write_zeroes": true, 00:30:15.927 "zcopy": false, 00:30:15.927 "get_zone_info": false, 00:30:15.927 "zone_management": false, 00:30:15.927 "zone_append": false, 00:30:15.927 "compare": false, 00:30:15.927 "compare_and_write": false, 00:30:15.927 "abort": false, 00:30:15.927 "seek_hole": true, 00:30:15.927 "seek_data": true, 00:30:15.927 "copy": false, 00:30:15.927 "nvme_iov_md": false 00:30:15.927 }, 00:30:15.927 "driver_specific": { 00:30:15.927 "lvol": { 00:30:15.927 "lvol_store_uuid": "83556be6-7d3c-496e-b993-3b4965b65d60", 00:30:15.927 "base_bdev": "nvme0n1", 00:30:15.927 "thin_provision": true, 00:30:15.927 "num_allocated_clusters": 0, 00:30:15.927 "snapshot": false, 00:30:15.927 "clone": false, 00:30:15.927 "esnap_clone": false 00:30:15.927 } 00:30:15.927 } 00:30:15.928 } 00:30:15.928 ]' 00:30:15.928 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
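get_bdev_size has now been traced twice, once for nvme0n1 and once for the thin lvol, and the jq calls around this point show its whole shape: fetch the bdev's JSON description over RPC, multiply block size by block count, report MiB. A sketch of the helper as the trace suggests it (autotest_common.sh doubtless adds bookkeeping; $rpc_py is the scripts/rpc.py wrapper set up earlier in the log):

    get_bdev_size() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for both bdevs here
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720, then 26476544
      echo $(( bs * nb / 1024 / 1024 ))             # 5120 MiB, then 103424 MiB
    }

For the lvol the arithmetic closes the loop: 26476544 blocks times 4096 bytes is exactly the 103424 MiB requested from bdev_lvol_create above.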
00:30:15.928 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:30:15.928 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:15.928 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:15.928 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:15.928 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:30:15.928 23:11:43 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:30:15.928 23:11:43 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:16.186 23:11:43 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:16.186 23:11:43 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 0880c603-2d14-4559-9bb9-3fc146714e74 00:30:16.186 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=0880c603-2d14-4559-9bb9-3fc146714e74 00:30:16.186 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:16.186 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:30:16.186 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:30:16.186 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0880c603-2d14-4559-9bb9-3fc146714e74 00:30:16.444 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:16.444 { 00:30:16.444 "name": "0880c603-2d14-4559-9bb9-3fc146714e74", 00:30:16.444 "aliases": [ 00:30:16.444 "lvs/nvme0n1p0" 00:30:16.444 ], 00:30:16.444 "product_name": "Logical Volume", 00:30:16.444 "block_size": 4096, 00:30:16.444 "num_blocks": 26476544, 00:30:16.444 "uuid": "0880c603-2d14-4559-9bb9-3fc146714e74", 00:30:16.444 "assigned_rate_limits": { 00:30:16.444 "rw_ios_per_sec": 0, 00:30:16.444 "rw_mbytes_per_sec": 0, 00:30:16.444 "r_mbytes_per_sec": 0, 00:30:16.444 "w_mbytes_per_sec": 0 00:30:16.444 }, 00:30:16.444 "claimed": false, 00:30:16.444 "zoned": false, 00:30:16.444 "supported_io_types": { 00:30:16.444 "read": true, 00:30:16.444 "write": true, 00:30:16.444 "unmap": true, 00:30:16.444 "flush": false, 00:30:16.444 "reset": true, 00:30:16.444 "nvme_admin": false, 00:30:16.444 "nvme_io": false, 00:30:16.444 "nvme_io_md": false, 00:30:16.444 "write_zeroes": true, 00:30:16.444 "zcopy": false, 00:30:16.444 "get_zone_info": false, 00:30:16.444 "zone_management": false, 00:30:16.444 "zone_append": false, 00:30:16.444 "compare": false, 00:30:16.444 "compare_and_write": false, 00:30:16.444 "abort": false, 00:30:16.444 "seek_hole": true, 00:30:16.444 "seek_data": true, 00:30:16.444 "copy": false, 00:30:16.444 "nvme_iov_md": false 00:30:16.444 }, 00:30:16.444 "driver_specific": { 00:30:16.444 "lvol": { 00:30:16.444 "lvol_store_uuid": "83556be6-7d3c-496e-b993-3b4965b65d60", 00:30:16.444 "base_bdev": "nvme0n1", 00:30:16.444 "thin_provision": true, 00:30:16.444 "num_allocated_clusters": 0, 00:30:16.444 "snapshot": false, 00:30:16.444 "clone": false, 00:30:16.444 "esnap_clone": false 00:30:16.444 } 00:30:16.444 } 00:30:16.444 } 00:30:16.444 ]' 00:30:16.444 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:16.444 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:30:16.444 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:16.702 23:11:43 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:30:16.702 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:16.702 23:11:43 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:30:16.702 23:11:43 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:16.702 23:11:43 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0880c603-2d14-4559-9bb9-3fc146714e74 --l2p_dram_limit 10' 00:30:16.702 23:11:43 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:16.702 23:11:43 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:16.702 23:11:43 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:16.702 23:11:43 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:30:16.702 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:30:16.702 23:11:43 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0880c603-2d14-4559-9bb9-3fc146714e74 --l2p_dram_limit 10 -c nvc0n1p0 00:30:16.961 [2024-12-09 23:11:44.089054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.961 [2024-12-09 23:11:44.089113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:16.961 [2024-12-09 23:11:44.089136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:16.961 [2024-12-09 23:11:44.089149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.961 [2024-12-09 23:11:44.089228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.961 [2024-12-09 23:11:44.089242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:16.961 [2024-12-09 23:11:44.089258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:16.961 [2024-12-09 23:11:44.089269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.961 [2024-12-09 23:11:44.089303] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:16.961 [2024-12-09 23:11:44.090441] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:16.961 [2024-12-09 23:11:44.090511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.961 [2024-12-09 23:11:44.090526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:16.961 [2024-12-09 23:11:44.090541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.216 ms 00:30:16.961 [2024-12-09 23:11:44.090553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.961 [2024-12-09 23:11:44.090807] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c2221666-d845-4992-88fe-319b86a6eed4 00:30:16.961 [2024-12-09 23:11:44.093304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.961 [2024-12-09 23:11:44.093339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:16.961 [2024-12-09 23:11:44.093353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:30:16.961 [2024-12-09 23:11:44.093367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.961 [2024-12-09 23:11:44.106916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.961 [2024-12-09 
23:11:44.106978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:16.961 [2024-12-09 23:11:44.106993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.454 ms 00:30:16.961 [2024-12-09 23:11:44.107006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.962 [2024-12-09 23:11:44.107132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.962 [2024-12-09 23:11:44.107168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:16.962 [2024-12-09 23:11:44.107181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:30:16.962 [2024-12-09 23:11:44.107200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.962 [2024-12-09 23:11:44.107286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.962 [2024-12-09 23:11:44.107308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:16.962 [2024-12-09 23:11:44.107324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:16.962 [2024-12-09 23:11:44.107338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.962 [2024-12-09 23:11:44.107368] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:16.962 [2024-12-09 23:11:44.112911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.962 [2024-12-09 23:11:44.112949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:16.962 [2024-12-09 23:11:44.112964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.556 ms 00:30:16.962 [2024-12-09 23:11:44.112975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.962 [2024-12-09 23:11:44.113024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.962 [2024-12-09 23:11:44.113036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:16.962 [2024-12-09 23:11:44.113049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:16.962 [2024-12-09 23:11:44.113059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.962 [2024-12-09 23:11:44.113109] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:16.962 [2024-12-09 23:11:44.113247] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:16.962 [2024-12-09 23:11:44.113268] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:16.962 [2024-12-09 23:11:44.113293] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:16.962 [2024-12-09 23:11:44.113310] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:16.962 [2024-12-09 23:11:44.113323] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:16.962 [2024-12-09 23:11:44.113339] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:16.962 [2024-12-09 23:11:44.113349] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:16.962 [2024-12-09 23:11:44.113365] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:16.962 [2024-12-09 23:11:44.113376] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:16.962 [2024-12-09 23:11:44.113389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.962 [2024-12-09 23:11:44.113409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:16.962 [2024-12-09 23:11:44.113423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:30:16.962 [2024-12-09 23:11:44.113434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.962 [2024-12-09 23:11:44.113526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.962 [2024-12-09 23:11:44.113538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:16.962 [2024-12-09 23:11:44.113551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:16.962 [2024-12-09 23:11:44.113562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.962 [2024-12-09 23:11:44.113655] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:16.962 [2024-12-09 23:11:44.113668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:16.962 [2024-12-09 23:11:44.113682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:16.962 [2024-12-09 23:11:44.113692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.962 [2024-12-09 23:11:44.113705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:16.962 [2024-12-09 23:11:44.113715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:16.962 [2024-12-09 23:11:44.113727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:16.962 [2024-12-09 23:11:44.113737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:16.962 [2024-12-09 23:11:44.113751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:16.962 [2024-12-09 23:11:44.113761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:16.962 [2024-12-09 23:11:44.113773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:16.962 [2024-12-09 23:11:44.113783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:16.962 [2024-12-09 23:11:44.113795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:16.962 [2024-12-09 23:11:44.113805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:16.962 [2024-12-09 23:11:44.113816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:16.962 [2024-12-09 23:11:44.113826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.962 [2024-12-09 23:11:44.113840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:16.962 [2024-12-09 23:11:44.113850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:16.962 [2024-12-09 23:11:44.113861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.962 [2024-12-09 23:11:44.113871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:16.962 [2024-12-09 23:11:44.113882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:16.962 [2024-12-09 23:11:44.113891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.962 [2024-12-09 23:11:44.113903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:16.962 
[2024-12-09 23:11:44.113912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:16.962 [2024-12-09 23:11:44.113923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.962 [2024-12-09 23:11:44.113932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:16.962 [2024-12-09 23:11:44.113944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:16.962 [2024-12-09 23:11:44.113953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.962 [2024-12-09 23:11:44.113965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:16.962 [2024-12-09 23:11:44.113974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:16.962 [2024-12-09 23:11:44.113985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:16.962 [2024-12-09 23:11:44.113994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:16.962 [2024-12-09 23:11:44.114008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:16.962 [2024-12-09 23:11:44.114017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:16.962 [2024-12-09 23:11:44.114030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:16.962 [2024-12-09 23:11:44.114039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:16.962 [2024-12-09 23:11:44.114050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:16.962 [2024-12-09 23:11:44.114060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:16.962 [2024-12-09 23:11:44.114071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:16.962 [2024-12-09 23:11:44.114080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.962 [2024-12-09 23:11:44.114092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:16.962 [2024-12-09 23:11:44.114102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:16.962 [2024-12-09 23:11:44.114114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.962 [2024-12-09 23:11:44.114124] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:16.962 [2024-12-09 23:11:44.114137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:16.962 [2024-12-09 23:11:44.114146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:16.962 [2024-12-09 23:11:44.114159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:16.962 [2024-12-09 23:11:44.114170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:16.962 [2024-12-09 23:11:44.114185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:16.962 [2024-12-09 23:11:44.114195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:16.962 [2024-12-09 23:11:44.114208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:16.962 [2024-12-09 23:11:44.114217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:16.962 [2024-12-09 23:11:44.114229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:16.962 [2024-12-09 23:11:44.114240] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:16.962 [2024-12-09 
23:11:44.114260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.962 [2024-12-09 23:11:44.114272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:16.962 [2024-12-09 23:11:44.114286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:16.962 [2024-12-09 23:11:44.114296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:16.962 [2024-12-09 23:11:44.114309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:16.962 [2024-12-09 23:11:44.114320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:16.962 [2024-12-09 23:11:44.114335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:16.962 [2024-12-09 23:11:44.114345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:16.962 [2024-12-09 23:11:44.114359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:16.962 [2024-12-09 23:11:44.114369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:16.963 [2024-12-09 23:11:44.114385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:16.963 [2024-12-09 23:11:44.114395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:16.963 [2024-12-09 23:11:44.114407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:16.963 [2024-12-09 23:11:44.114418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:16.963 [2024-12-09 23:11:44.114431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:16.963 [2024-12-09 23:11:44.114441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:16.963 [2024-12-09 23:11:44.114476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.963 [2024-12-09 23:11:44.114488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:16.963 [2024-12-09 23:11:44.114501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:16.963 [2024-12-09 23:11:44.114513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:16.963 [2024-12-09 23:11:44.114527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:16.963 [2024-12-09 23:11:44.114540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.963 [2024-12-09 23:11:44.114553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:16.963 [2024-12-09 23:11:44.114564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:30:16.963 [2024-12-09 23:11:44.114577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.963 [2024-12-09 23:11:44.114624] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:16.963 [2024-12-09 23:11:44.114643] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:20.257 [2024-12-09 23:11:47.409706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.257 [2024-12-09 23:11:47.409804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:20.257 [2024-12-09 23:11:47.409825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3300.429 ms 00:30:20.257 [2024-12-09 23:11:47.409839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.257 [2024-12-09 23:11:47.457072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.257 [2024-12-09 23:11:47.457146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:20.257 [2024-12-09 23:11:47.457164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.941 ms 00:30:20.257 [2024-12-09 23:11:47.457178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.257 [2024-12-09 23:11:47.457346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.257 [2024-12-09 23:11:47.457364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:20.257 [2024-12-09 23:11:47.457376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:20.257 [2024-12-09 23:11:47.457397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.257 [2024-12-09 23:11:47.508506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.257 [2024-12-09 23:11:47.508578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:20.257 [2024-12-09 23:11:47.508594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.105 ms 00:30:20.257 [2024-12-09 23:11:47.508609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.257 [2024-12-09 23:11:47.508665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.257 [2024-12-09 23:11:47.508683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:20.257 [2024-12-09 23:11:47.508696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:20.258 [2024-12-09 23:11:47.508722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.258 [2024-12-09 23:11:47.509219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.258 [2024-12-09 23:11:47.509239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:20.258 [2024-12-09 23:11:47.509251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:30:20.258 [2024-12-09 23:11:47.509264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.258 
[2024-12-09 23:11:47.509369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.258 [2024-12-09 23:11:47.509383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:20.258 [2024-12-09 23:11:47.509398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:30:20.258 [2024-12-09 23:11:47.509414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.258 [2024-12-09 23:11:47.530319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.258 [2024-12-09 23:11:47.530413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:20.258 [2024-12-09 23:11:47.530432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.917 ms 00:30:20.258 [2024-12-09 23:11:47.530446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.258 [2024-12-09 23:11:47.555384] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:20.258 [2024-12-09 23:11:47.560136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.258 [2024-12-09 23:11:47.560192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:20.258 [2024-12-09 23:11:47.560211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.573 ms 00:30:20.258 [2024-12-09 23:11:47.560224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.516 [2024-12-09 23:11:47.649745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.516 [2024-12-09 23:11:47.649825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:20.516 [2024-12-09 23:11:47.649845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.594 ms 00:30:20.516 [2024-12-09 23:11:47.649858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.516 [2024-12-09 23:11:47.650057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.516 [2024-12-09 23:11:47.650074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:20.516 [2024-12-09 23:11:47.650092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:30:20.516 [2024-12-09 23:11:47.650103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.516 [2024-12-09 23:11:47.693331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.516 [2024-12-09 23:11:47.693404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:20.516 [2024-12-09 23:11:47.693427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.200 ms 00:30:20.516 [2024-12-09 23:11:47.693439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.516 [2024-12-09 23:11:47.736631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.516 [2024-12-09 23:11:47.736700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:20.516 [2024-12-09 23:11:47.736722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.177 ms 00:30:20.516 [2024-12-09 23:11:47.736734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.516 [2024-12-09 23:11:47.737565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.516 [2024-12-09 23:11:47.737586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:20.516 
[2024-12-09 23:11:47.737601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:30:20.516 [2024-12-09 23:11:47.737628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.775 [2024-12-09 23:11:47.853545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.775 [2024-12-09 23:11:47.853614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:20.775 [2024-12-09 23:11:47.853641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.004 ms 00:30:20.775 [2024-12-09 23:11:47.853652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.775 [2024-12-09 23:11:47.897827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.775 [2024-12-09 23:11:47.897916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:20.775 [2024-12-09 23:11:47.897937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.079 ms 00:30:20.775 [2024-12-09 23:11:47.897949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.775 [2024-12-09 23:11:47.942418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.775 [2024-12-09 23:11:47.942530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:20.775 [2024-12-09 23:11:47.942552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.440 ms 00:30:20.775 [2024-12-09 23:11:47.942564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.775 [2024-12-09 23:11:47.986203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.775 [2024-12-09 23:11:47.986287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:20.775 [2024-12-09 23:11:47.986307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.609 ms 00:30:20.775 [2024-12-09 23:11:47.986334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.775 [2024-12-09 23:11:47.986425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.775 [2024-12-09 23:11:47.986439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:20.775 [2024-12-09 23:11:47.986474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:20.775 [2024-12-09 23:11:47.986485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.775 [2024-12-09 23:11:47.986641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:20.775 [2024-12-09 23:11:47.986661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:20.775 [2024-12-09 23:11:47.986676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:20.775 [2024-12-09 23:11:47.986686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:20.775 [2024-12-09 23:11:47.987907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3904.692 ms, result 0 00:30:20.775 { 00:30:20.775 "name": "ftl0", 00:30:20.775 "uuid": "c2221666-d845-4992-88fe-319b86a6eed4" 00:30:20.775 } 00:30:20.775 23:11:48 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:20.775 23:11:48 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:21.034 23:11:48 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:30:21.034 23:11:48 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:21.294 [2024-12-09 23:11:48.450371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.294 [2024-12-09 23:11:48.450474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:21.294 [2024-12-09 23:11:48.450492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:21.294 [2024-12-09 23:11:48.450506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.294 [2024-12-09 23:11:48.450539] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:21.294 [2024-12-09 23:11:48.454936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.294 [2024-12-09 23:11:48.454982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:21.294 [2024-12-09 23:11:48.454999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.375 ms 00:30:21.294 [2024-12-09 23:11:48.455010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.294 [2024-12-09 23:11:48.455287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.294 [2024-12-09 23:11:48.455306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:21.294 [2024-12-09 23:11:48.455321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:30:21.294 [2024-12-09 23:11:48.455331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.294 [2024-12-09 23:11:48.458049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.294 [2024-12-09 23:11:48.458078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:21.294 [2024-12-09 23:11:48.458094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.699 ms 00:30:21.294 [2024-12-09 23:11:48.458105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.294 [2024-12-09 23:11:48.463237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.294 [2024-12-09 23:11:48.463285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:21.294 [2024-12-09 23:11:48.463306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.109 ms 00:30:21.294 [2024-12-09 23:11:48.463317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.294 [2024-12-09 23:11:48.505490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.294 [2024-12-09 23:11:48.505589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:21.294 [2024-12-09 23:11:48.505610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.118 ms 00:30:21.294 [2024-12-09 23:11:48.505621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.294 [2024-12-09 23:11:48.531438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.294 [2024-12-09 23:11:48.531542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:21.294 [2024-12-09 23:11:48.531565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.755 ms 00:30:21.294 [2024-12-09 23:11:48.531578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.294 [2024-12-09 23:11:48.531841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.294 [2024-12-09 23:11:48.531860] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:21.294 [2024-12-09 23:11:48.531875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:30:21.294 [2024-12-09 23:11:48.531887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.294 [2024-12-09 23:11:48.574936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.294 [2024-12-09 23:11:48.575033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:21.294 [2024-12-09 23:11:48.575054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.078 ms 00:30:21.294 [2024-12-09 23:11:48.575065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.294 [2024-12-09 23:11:48.618127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.294 [2024-12-09 23:11:48.618211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:21.294 [2024-12-09 23:11:48.618231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.046 ms 00:30:21.294 [2024-12-09 23:11:48.618242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.559 [2024-12-09 23:11:48.660024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.560 [2024-12-09 23:11:48.660121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:21.560 [2024-12-09 23:11:48.660141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.749 ms 00:30:21.560 [2024-12-09 23:11:48.660153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.560 [2024-12-09 23:11:48.702133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.560 [2024-12-09 23:11:48.702219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:21.560 [2024-12-09 23:11:48.702239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.851 ms 00:30:21.560 [2024-12-09 23:11:48.702251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.560 [2024-12-09 23:11:48.702336] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:21.560 [2024-12-09 23:11:48.702356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702501] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.702744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 
[2024-12-09 23:11:48.703129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:21.560 [2024-12-09 23:11:48.703362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:30:21.561 [2024-12-09 23:11:48.703496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.703996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.704008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.704023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.704035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.704050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:21.561 [2024-12-09 23:11:48.704070] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:21.561 [2024-12-09 23:11:48.704085] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c2221666-d845-4992-88fe-319b86a6eed4 00:30:21.561 [2024-12-09 23:11:48.704097] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:21.561 [2024-12-09 23:11:48.704114] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:21.561 [2024-12-09 23:11:48.704130] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:21.561 [2024-12-09 23:11:48.704145] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:21.561 [2024-12-09 23:11:48.704156] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:21.561 [2024-12-09 23:11:48.704170] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:21.561 [2024-12-09 23:11:48.704181] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:21.561 [2024-12-09 23:11:48.704193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:21.561 [2024-12-09 23:11:48.704204] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:30:21.561 [2024-12-09 23:11:48.704219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.561 [2024-12-09 23:11:48.704231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:21.561 [2024-12-09 23:11:48.704246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:30:21.561 [2024-12-09 23:11:48.704261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.561 [2024-12-09 23:11:48.725421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.562 [2024-12-09 23:11:48.725513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:21.562 [2024-12-09 23:11:48.725533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.068 ms 00:30:21.562 [2024-12-09 23:11:48.725544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.562 [2024-12-09 23:11:48.726098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.562 [2024-12-09 23:11:48.726118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:21.562 [2024-12-09 23:11:48.726136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:30:21.562 [2024-12-09 23:11:48.726146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.562 [2024-12-09 23:11:48.793923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.562 [2024-12-09 23:11:48.794004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:21.562 [2024-12-09 23:11:48.794025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.562 [2024-12-09 23:11:48.794036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.562 [2024-12-09 23:11:48.794125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.562 [2024-12-09 23:11:48.794138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:21.562 [2024-12-09 23:11:48.794155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.562 [2024-12-09 23:11:48.794166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.562 [2024-12-09 23:11:48.794292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.562 [2024-12-09 23:11:48.794307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:21.562 [2024-12-09 23:11:48.794321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.562 [2024-12-09 23:11:48.794331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.562 [2024-12-09 23:11:48.794359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.562 [2024-12-09 23:11:48.794370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:21.562 [2024-12-09 23:11:48.794383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.562 [2024-12-09 23:11:48.794396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.828 [2024-12-09 23:11:48.925281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.828 [2024-12-09 23:11:48.925364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:21.828 [2024-12-09 23:11:48.925385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
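A note on the statistics dump just above: WAF (write amplification factor) is conventionally media writes divided by host writes, and this dump shows total writes: 960 against user writes: 0, so the ratio 960 / 0 is undefined and prints as "inf" — expected for a freshly created FTL device that has written only internal metadata and no user data yet.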
00:30:21.828 [2024-12-09 23:11:48.925398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.828 [2024-12-09 23:11:49.037395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.828 [2024-12-09 23:11:49.037493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:21.828 [2024-12-09 23:11:49.037513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.828 [2024-12-09 23:11:49.037528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.828 [2024-12-09 23:11:49.037694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.828 [2024-12-09 23:11:49.037708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:21.828 [2024-12-09 23:11:49.037723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.828 [2024-12-09 23:11:49.037734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.828 [2024-12-09 23:11:49.037820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.828 [2024-12-09 23:11:49.037833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:21.828 [2024-12-09 23:11:49.037848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.828 [2024-12-09 23:11:49.037860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.828 [2024-12-09 23:11:49.038007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.828 [2024-12-09 23:11:49.038024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:21.828 [2024-12-09 23:11:49.038038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.828 [2024-12-09 23:11:49.038050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.828 [2024-12-09 23:11:49.038098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.828 [2024-12-09 23:11:49.038112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:21.828 [2024-12-09 23:11:49.038126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.828 [2024-12-09 23:11:49.038138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.828 [2024-12-09 23:11:49.038190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.828 [2024-12-09 23:11:49.038202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:21.828 [2024-12-09 23:11:49.038216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.828 [2024-12-09 23:11:49.038227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.828 [2024-12-09 23:11:49.038282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:21.828 [2024-12-09 23:11:49.038295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:21.828 [2024-12-09 23:11:49.038310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:21.828 [2024-12-09 23:11:49.038321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.828 [2024-12-09 23:11:49.038524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 589.020 ms, result 0 00:30:21.828 true 00:30:21.828 23:11:49 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79479 
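The teardown traced above follows restore.sh lines 61-66: snapshot the bdev subsystem configuration as JSON (so the restore flow can reload the same FTL device later), unload ftl0 — persisting the L2P, metadata, and superblock as logged — and kill the target process. A condensed sketch under the same repo layout (the redirect target matches the ftl.json path spdk_dd consumes below; $svcpid is a hypothetical variable holding the target's pid, 79479 in this run):

    # Wrap the bdev subsystem config in a top-level "subsystems" array
    {
        echo '{"subsystems": ['
        scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > test/ftl/config/ftl.json

    # Detach the FTL bdev cleanly, then stop the SPDK app and reap it
    scripts/rpc.py bdev_ftl_unload -b ftl0
    kill "$svcpid" && wait "$svcpid"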
00:30:21.828 23:11:49 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79479 ']' 00:30:21.828 23:11:49 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79479 00:30:21.828 23:11:49 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:30:21.828 23:11:49 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.828 23:11:49 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79479 00:30:21.828 23:11:49 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:21.828 23:11:49 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:21.828 23:11:49 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79479' 00:30:21.828 killing process with pid 79479 00:30:21.828 23:11:49 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79479 00:30:21.828 23:11:49 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79479 00:30:28.390 23:11:55 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:32.609 262144+0 records in 00:30:32.609 262144+0 records out 00:30:32.609 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.17086 s, 257 MB/s 00:30:32.609 23:11:59 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:33.984 23:12:01 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:33.984 [2024-12-09 23:12:01.285950] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:30:33.984 [2024-12-09 23:12:01.286088] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79731 ] 00:30:34.243 [2024-12-09 23:12:01.471048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.502 [2024-12-09 23:12:01.608206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.760 [2024-12-09 23:12:02.007870] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:34.760 [2024-12-09 23:12:02.007966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:35.021 [2024-12-09 23:12:02.175272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.021 [2024-12-09 23:12:02.175368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:35.021 [2024-12-09 23:12:02.175388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:35.021 [2024-12-09 23:12:02.175400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.021 [2024-12-09 23:12:02.175486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.021 [2024-12-09 23:12:02.175505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:35.021 [2024-12-09 23:12:02.175517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:35.021 [2024-12-09 23:12:02.175527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.021 [2024-12-09 23:12:02.175552] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:30:35.021 [2024-12-09 23:12:02.176681] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:35.021 [2024-12-09 23:12:02.176714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.021 [2024-12-09 23:12:02.176726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:35.021 [2024-12-09 23:12:02.176737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.171 ms 00:30:35.021 [2024-12-09 23:12:02.176752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.021 [2024-12-09 23:12:02.178768] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:35.021 [2024-12-09 23:12:02.199294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.021 [2024-12-09 23:12:02.199372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:35.021 [2024-12-09 23:12:02.199391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.556 ms 00:30:35.021 [2024-12-09 23:12:02.199403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.021 [2024-12-09 23:12:02.199532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.021 [2024-12-09 23:12:02.199547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:35.021 [2024-12-09 23:12:02.199558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:35.021 [2024-12-09 23:12:02.199569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.021 [2024-12-09 23:12:02.210581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.021 [2024-12-09 23:12:02.210655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:35.021 [2024-12-09 23:12:02.210680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.917 ms 00:30:35.021 [2024-12-09 23:12:02.210712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.021 [2024-12-09 23:12:02.210845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.021 [2024-12-09 23:12:02.210869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:35.021 [2024-12-09 23:12:02.210901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:30:35.021 [2024-12-09 23:12:02.210913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.021 [2024-12-09 23:12:02.211010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.021 [2024-12-09 23:12:02.211023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:35.021 [2024-12-09 23:12:02.211034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:35.021 [2024-12-09 23:12:02.211044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.021 [2024-12-09 23:12:02.211078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:35.021 [2024-12-09 23:12:02.216388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.021 [2024-12-09 23:12:02.216433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:35.021 [2024-12-09 23:12:02.216461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.325 ms 00:30:35.021 [2024-12-09 23:12:02.216473] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.022 [2024-12-09 23:12:02.216520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.022 [2024-12-09 23:12:02.216532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:35.022 [2024-12-09 23:12:02.216543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:35.022 [2024-12-09 23:12:02.216553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.022 [2024-12-09 23:12:02.216602] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:35.022 [2024-12-09 23:12:02.216629] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:35.022 [2024-12-09 23:12:02.216666] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:35.022 [2024-12-09 23:12:02.216690] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:35.022 [2024-12-09 23:12:02.216782] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:35.022 [2024-12-09 23:12:02.216796] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:35.022 [2024-12-09 23:12:02.216810] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:35.022 [2024-12-09 23:12:02.216834] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:35.022 [2024-12-09 23:12:02.216846] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:35.022 [2024-12-09 23:12:02.216858] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:35.022 [2024-12-09 23:12:02.216868] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:35.022 [2024-12-09 23:12:02.216884] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:35.022 [2024-12-09 23:12:02.216893] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:35.022 [2024-12-09 23:12:02.216904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.022 [2024-12-09 23:12:02.216914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:35.022 [2024-12-09 23:12:02.216925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:30:35.022 [2024-12-09 23:12:02.216935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.022 [2024-12-09 23:12:02.217007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.022 [2024-12-09 23:12:02.217025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:35.022 [2024-12-09 23:12:02.217036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:35.022 [2024-12-09 23:12:02.217046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.022 [2024-12-09 23:12:02.217148] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:35.022 [2024-12-09 23:12:02.217163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:35.022 [2024-12-09 23:12:02.217174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:30:35.022 [2024-12-09 23:12:02.217184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:35.022 [2024-12-09 23:12:02.217204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:35.022 [2024-12-09 23:12:02.217223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:35.022 [2024-12-09 23:12:02.217232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:35.022 [2024-12-09 23:12:02.217251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:35.022 [2024-12-09 23:12:02.217262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:35.022 [2024-12-09 23:12:02.217271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:35.022 [2024-12-09 23:12:02.217291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:35.022 [2024-12-09 23:12:02.217301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:35.022 [2024-12-09 23:12:02.217311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:35.022 [2024-12-09 23:12:02.217330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:35.022 [2024-12-09 23:12:02.217339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:35.022 [2024-12-09 23:12:02.217359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:35.022 [2024-12-09 23:12:02.217377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:35.022 [2024-12-09 23:12:02.217386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:35.022 [2024-12-09 23:12:02.217405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:35.022 [2024-12-09 23:12:02.217414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:35.022 [2024-12-09 23:12:02.217432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:35.022 [2024-12-09 23:12:02.217441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:35.022 [2024-12-09 23:12:02.217471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:35.022 [2024-12-09 23:12:02.217480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:35.022 [2024-12-09 23:12:02.217499] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:30:35.022 [2024-12-09 23:12:02.217508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:35.022 [2024-12-09 23:12:02.217518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:35.022 [2024-12-09 23:12:02.217527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:35.022 [2024-12-09 23:12:02.217538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:35.022 [2024-12-09 23:12:02.217547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:35.022 [2024-12-09 23:12:02.217565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:35.022 [2024-12-09 23:12:02.217575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217587] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:35.022 [2024-12-09 23:12:02.217598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:35.022 [2024-12-09 23:12:02.217608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:35.022 [2024-12-09 23:12:02.217619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:35.022 [2024-12-09 23:12:02.217629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:35.022 [2024-12-09 23:12:02.217640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:35.022 [2024-12-09 23:12:02.217649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:35.022 [2024-12-09 23:12:02.217659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:35.022 [2024-12-09 23:12:02.217668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:35.022 [2024-12-09 23:12:02.217677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:35.022 [2024-12-09 23:12:02.217688] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:35.022 [2024-12-09 23:12:02.217700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:35.022 [2024-12-09 23:12:02.217718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:35.022 [2024-12-09 23:12:02.217729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:35.022 [2024-12-09 23:12:02.217739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:35.022 [2024-12-09 23:12:02.217749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:35.022 [2024-12-09 23:12:02.217760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:35.022 [2024-12-09 23:12:02.217770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:35.023 [2024-12-09 23:12:02.217781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:35.023 [2024-12-09 23:12:02.217791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:35.023 [2024-12-09 23:12:02.217802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:35.023 [2024-12-09 23:12:02.217812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:35.023 [2024-12-09 23:12:02.217822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:35.023 [2024-12-09 23:12:02.217832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:35.023 [2024-12-09 23:12:02.217842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:35.023 [2024-12-09 23:12:02.217852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:35.023 [2024-12-09 23:12:02.217862] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:35.023 [2024-12-09 23:12:02.217874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:35.023 [2024-12-09 23:12:02.217885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:35.023 [2024-12-09 23:12:02.217895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:35.023 [2024-12-09 23:12:02.217905] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:35.023 [2024-12-09 23:12:02.217915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:35.023 [2024-12-09 23:12:02.217927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.023 [2024-12-09 23:12:02.217938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:35.023 [2024-12-09 23:12:02.217948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:30:35.023 [2024-12-09 23:12:02.217959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.023 [2024-12-09 23:12:02.265298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.023 [2024-12-09 23:12:02.265371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:35.023 [2024-12-09 23:12:02.265389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.359 ms 00:30:35.023 [2024-12-09 23:12:02.265406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.023 [2024-12-09 23:12:02.265546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.023 [2024-12-09 23:12:02.265559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:35.023 [2024-12-09 23:12:02.265570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.063 ms 00:30:35.023 [2024-12-09 23:12:02.265580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.023 [2024-12-09 23:12:02.324659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.023 [2024-12-09 23:12:02.324740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:35.023 [2024-12-09 23:12:02.324756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.077 ms 00:30:35.023 [2024-12-09 23:12:02.324767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.023 [2024-12-09 23:12:02.324840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.023 [2024-12-09 23:12:02.324852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:35.023 [2024-12-09 23:12:02.324868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:35.023 [2024-12-09 23:12:02.324879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.023 [2024-12-09 23:12:02.325402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.023 [2024-12-09 23:12:02.325426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:35.023 [2024-12-09 23:12:02.325437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:30:35.023 [2024-12-09 23:12:02.325447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.023 [2024-12-09 23:12:02.325601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.023 [2024-12-09 23:12:02.325616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:35.023 [2024-12-09 23:12:02.325633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:30:35.023 [2024-12-09 23:12:02.325644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.023 [2024-12-09 23:12:02.345529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.023 [2024-12-09 23:12:02.345618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:35.023 [2024-12-09 23:12:02.345638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.891 ms 00:30:35.023 [2024-12-09 23:12:02.345652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.367098] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:35.283 [2024-12-09 23:12:02.367192] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:35.283 [2024-12-09 23:12:02.367212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.367224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:35.283 [2024-12-09 23:12:02.367238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.423 ms 00:30:35.283 [2024-12-09 23:12:02.367248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.398787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.398904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:35.283 [2024-12-09 23:12:02.398922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.500 ms 00:30:35.283 [2024-12-09 23:12:02.398933] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.419139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.419223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:35.283 [2024-12-09 23:12:02.419241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.172 ms 00:30:35.283 [2024-12-09 23:12:02.419251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.438857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.438943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:35.283 [2024-12-09 23:12:02.438959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.560 ms 00:30:35.283 [2024-12-09 23:12:02.438970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.439875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.439913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:35.283 [2024-12-09 23:12:02.439926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:30:35.283 [2024-12-09 23:12:02.439945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.534545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.534639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:35.283 [2024-12-09 23:12:02.534658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.719 ms 00:30:35.283 [2024-12-09 23:12:02.534680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.548626] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:35.283 [2024-12-09 23:12:02.552273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.552335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:35.283 [2024-12-09 23:12:02.552353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.532 ms 00:30:35.283 [2024-12-09 23:12:02.552365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.552497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.552513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:35.283 [2024-12-09 23:12:02.552525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:35.283 [2024-12-09 23:12:02.552536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.552657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.552670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:35.283 [2024-12-09 23:12:02.552682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:35.283 [2024-12-09 23:12:02.552692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.552719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.552731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:30:35.283 [2024-12-09 23:12:02.552742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:35.283 [2024-12-09 23:12:02.552753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.552791] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:35.283 [2024-12-09 23:12:02.552807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.552817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:35.283 [2024-12-09 23:12:02.552829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:35.283 [2024-12-09 23:12:02.552841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.593313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.593401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:35.283 [2024-12-09 23:12:02.593420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.509 ms 00:30:35.283 [2024-12-09 23:12:02.593442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.593564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.283 [2024-12-09 23:12:02.593577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:35.283 [2024-12-09 23:12:02.593589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:30:35.283 [2024-12-09 23:12:02.593599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.283 [2024-12-09 23:12:02.595209] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 420.037 ms, result 0 00:30:36.660  [2024-12-09T23:12:04.932Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-09T23:12:05.868Z] Copying: 55/1024 [MB] (29 MBps) [2024-12-09T23:12:06.804Z] Copying: 81/1024 [MB] (25 MBps) [2024-12-09T23:12:07.739Z] Copying: 107/1024 [MB] (25 MBps) [2024-12-09T23:12:08.675Z] Copying: 133/1024 [MB] (25 MBps) [2024-12-09T23:12:09.609Z] Copying: 158/1024 [MB] (25 MBps) [2024-12-09T23:12:10.983Z] Copying: 184/1024 [MB] (25 MBps) [2024-12-09T23:12:11.919Z] Copying: 209/1024 [MB] (25 MBps) [2024-12-09T23:12:12.857Z] Copying: 238/1024 [MB] (29 MBps) [2024-12-09T23:12:13.794Z] Copying: 268/1024 [MB] (29 MBps) [2024-12-09T23:12:14.731Z] Copying: 297/1024 [MB] (29 MBps) [2024-12-09T23:12:15.665Z] Copying: 327/1024 [MB] (29 MBps) [2024-12-09T23:12:16.611Z] Copying: 357/1024 [MB] (30 MBps) [2024-12-09T23:12:18.002Z] Copying: 384/1024 [MB] (26 MBps) [2024-12-09T23:12:18.939Z] Copying: 410/1024 [MB] (26 MBps) [2024-12-09T23:12:19.874Z] Copying: 436/1024 [MB] (26 MBps) [2024-12-09T23:12:20.809Z] Copying: 462/1024 [MB] (26 MBps) [2024-12-09T23:12:21.782Z] Copying: 489/1024 [MB] (27 MBps) [2024-12-09T23:12:22.719Z] Copying: 518/1024 [MB] (28 MBps) [2024-12-09T23:12:23.660Z] Copying: 548/1024 [MB] (30 MBps) [2024-12-09T23:12:24.593Z] Copying: 574/1024 [MB] (25 MBps) [2024-12-09T23:12:25.976Z] Copying: 611/1024 [MB] (36 MBps) [2024-12-09T23:12:26.913Z] Copying: 641/1024 [MB] (29 MBps) [2024-12-09T23:12:27.850Z] Copying: 666/1024 [MB] (25 MBps) [2024-12-09T23:12:28.786Z] Copying: 692/1024 [MB] (25 MBps) [2024-12-09T23:12:29.722Z] Copying: 717/1024 [MB] (24 MBps) [2024-12-09T23:12:30.658Z] Copying: 742/1024 [MB] (25 
MBps) [2024-12-09T23:12:31.636Z] Copying: 767/1024 [MB] (24 MBps) [2024-12-09T23:12:32.576Z] Copying: 791/1024 [MB] (24 MBps) [2024-12-09T23:12:33.953Z] Copying: 815/1024 [MB] (24 MBps) [2024-12-09T23:12:34.958Z] Copying: 840/1024 [MB] (24 MBps) [2024-12-09T23:12:35.906Z] Copying: 867/1024 [MB] (26 MBps) [2024-12-09T23:12:36.845Z] Copying: 902/1024 [MB] (35 MBps) [2024-12-09T23:12:37.780Z] Copying: 928/1024 [MB] (25 MBps) [2024-12-09T23:12:38.715Z] Copying: 953/1024 [MB] (25 MBps) [2024-12-09T23:12:39.647Z] Copying: 979/1024 [MB] (25 MBps) [2024-12-09T23:12:39.906Z] Copying: 1013/1024 [MB] (34 MBps) [2024-12-09T23:12:39.906Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-09 23:12:39.814889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.570 [2024-12-09 23:12:39.814969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:12.570 [2024-12-09 23:12:39.815000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:12.570 [2024-12-09 23:12:39.815023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.570 [2024-12-09 23:12:39.815070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:12.570 [2024-12-09 23:12:39.819521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.571 [2024-12-09 23:12:39.819593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:12.571 [2024-12-09 23:12:39.819638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.422 ms 00:31:12.571 [2024-12-09 23:12:39.819658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.571 [2024-12-09 23:12:39.820999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.571 [2024-12-09 23:12:39.821060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:12.571 [2024-12-09 23:12:39.821086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:31:12.571 [2024-12-09 23:12:39.821107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.571 [2024-12-09 23:12:39.834595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.571 [2024-12-09 23:12:39.834698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:12.571 [2024-12-09 23:12:39.834729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.468 ms 00:31:12.571 [2024-12-09 23:12:39.834749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.571 [2024-12-09 23:12:39.840536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.571 [2024-12-09 23:12:39.840624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:12.571 [2024-12-09 23:12:39.840675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.639 ms 00:31:12.571 [2024-12-09 23:12:39.840696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.571 [2024-12-09 23:12:39.884707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.571 [2024-12-09 23:12:39.884810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:12.571 [2024-12-09 23:12:39.884845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.939 ms 00:31:12.571 [2024-12-09 23:12:39.884867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.830 [2024-12-09 23:12:39.909214] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.830 [2024-12-09 23:12:39.909321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:12.830 [2024-12-09 23:12:39.909356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.183 ms 00:31:12.830 [2024-12-09 23:12:39.909378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.830 [2024-12-09 23:12:39.909711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.830 [2024-12-09 23:12:39.909764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:12.830 [2024-12-09 23:12:39.909788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:31:12.830 [2024-12-09 23:12:39.909810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.830 [2024-12-09 23:12:39.954399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.830 [2024-12-09 23:12:39.954523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:12.830 [2024-12-09 23:12:39.954560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.619 ms 00:31:12.830 [2024-12-09 23:12:39.954581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.830 [2024-12-09 23:12:39.998650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.830 [2024-12-09 23:12:39.998754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:12.830 [2024-12-09 23:12:39.998788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.988 ms 00:31:12.830 [2024-12-09 23:12:39.998811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.830 [2024-12-09 23:12:40.040593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.830 [2024-12-09 23:12:40.040710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:12.830 [2024-12-09 23:12:40.040740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.678 ms 00:31:12.830 [2024-12-09 23:12:40.040764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.830 [2024-12-09 23:12:40.081644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.830 [2024-12-09 23:12:40.081761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:12.830 [2024-12-09 23:12:40.081791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.633 ms 00:31:12.830 [2024-12-09 23:12:40.081810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.830 [2024-12-09 23:12:40.081929] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:12.830 [2024-12-09 23:12:40.081961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082121] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 
23:12:40.082788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.082992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:31:12.830 [2024-12-09 23:12:40.083397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:12.830 [2024-12-09 23:12:40.083578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.083987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:12.831 [2024-12-09 23:12:40.084560] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:12.831 [2024-12-09 23:12:40.084599] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c2221666-d845-4992-88fe-319b86a6eed4 00:31:12.831 [2024-12-09 23:12:40.084623] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:12.831 [2024-12-09 23:12:40.084645] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:12.831 [2024-12-09 23:12:40.084667] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:12.831 [2024-12-09 23:12:40.084691] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:12.831 [2024-12-09 23:12:40.084712] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:12.831 [2024-12-09 23:12:40.084754] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:12.831 [2024-12-09 23:12:40.084777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:12.831 [2024-12-09 23:12:40.084799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:12.831 [2024-12-09 23:12:40.084819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:12.831 [2024-12-09 23:12:40.084849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.831 [2024-12-09 23:12:40.084874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:12.831 [2024-12-09 23:12:40.084900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.926 ms 00:31:12.831 [2024-12-09 23:12:40.084922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.831 [2024-12-09 23:12:40.107154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.831 [2024-12-09 23:12:40.107239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:12.831 [2024-12-09 23:12:40.107259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.144 ms 00:31:12.831 [2024-12-09 23:12:40.107273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.831 [2024-12-09 23:12:40.107852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:12.831 [2024-12-09 23:12:40.107875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:12.831 [2024-12-09 23:12:40.107891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:31:12.831 [2024-12-09 23:12:40.107920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.831 [2024-12-09 23:12:40.163624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.831 [2024-12-09 23:12:40.163748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:12.831 [2024-12-09 23:12:40.163776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.831 [2024-12-09 23:12:40.163789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.831 [2024-12-09 23:12:40.163891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.831 [2024-12-09 23:12:40.163906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:12.831 [2024-12-09 23:12:40.163919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.831 [2024-12-09 23:12:40.163941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.831 [2024-12-09 23:12:40.164084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.831 [2024-12-09 23:12:40.164102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:12.831 [2024-12-09 23:12:40.164115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.831 [2024-12-09 23:12:40.164128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.831 [2024-12-09 23:12:40.164151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.831 [2024-12-09 23:12:40.164172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:31:12.831 [2024-12-09 23:12:40.164186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.831 [2024-12-09 23:12:40.164199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.089 [2024-12-09 23:12:40.301822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.089 [2024-12-09 23:12:40.301932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:13.089 [2024-12-09 23:12:40.301952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.089 [2024-12-09 23:12:40.301965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.089 [2024-12-09 23:12:40.422936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.089 [2024-12-09 23:12:40.423032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:13.089 [2024-12-09 23:12:40.423053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.089 [2024-12-09 23:12:40.423094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.089 [2024-12-09 23:12:40.423214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.089 [2024-12-09 23:12:40.423230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:13.089 [2024-12-09 23:12:40.423245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.089 [2024-12-09 23:12:40.423258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.089 [2024-12-09 23:12:40.423318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.089 [2024-12-09 23:12:40.423333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:13.089 [2024-12-09 23:12:40.423346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.089 [2024-12-09 23:12:40.423359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.089 [2024-12-09 23:12:40.423539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.089 [2024-12-09 23:12:40.423557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:13.089 [2024-12-09 23:12:40.423571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.089 [2024-12-09 23:12:40.423584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.089 [2024-12-09 23:12:40.423638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.089 [2024-12-09 23:12:40.423654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:13.089 [2024-12-09 23:12:40.423667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.089 [2024-12-09 23:12:40.423679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.089 [2024-12-09 23:12:40.423725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.089 [2024-12-09 23:12:40.423750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:13.089 [2024-12-09 23:12:40.423764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.089 [2024-12-09 23:12:40.423776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.347 [2024-12-09 23:12:40.423828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:13.347 [2024-12-09 23:12:40.423843] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:13.347 [2024-12-09 23:12:40.423856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:13.347 [2024-12-09 23:12:40.423869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:13.347 [2024-12-09 23:12:40.424064] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 610.111 ms, result 0 00:31:14.744 00:31:14.744 00:31:14.744 23:12:41 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:31:14.744 [2024-12-09 23:12:41.780152] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:31:14.744 [2024-12-09 23:12:41.780305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80134 ] 00:31:14.744 [2024-12-09 23:12:41.962572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:15.003 [2024-12-09 23:12:42.093940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:15.261 [2024-12-09 23:12:42.493977] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.261 [2024-12-09 23:12:42.494090] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.520 [2024-12-09 23:12:42.659069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.520 [2024-12-09 23:12:42.659169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:15.520 [2024-12-09 23:12:42.659190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:15.520 [2024-12-09 23:12:42.659204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.520 [2024-12-09 23:12:42.659283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.520 [2024-12-09 23:12:42.659302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:15.520 [2024-12-09 23:12:42.659316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:15.520 [2024-12-09 23:12:42.659328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.520 [2024-12-09 23:12:42.659358] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:15.520 [2024-12-09 23:12:42.660439] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:15.520 [2024-12-09 23:12:42.660495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.520 [2024-12-09 23:12:42.660509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:15.520 [2024-12-09 23:12:42.660524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:31:15.520 [2024-12-09 23:12:42.660537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.520 [2024-12-09 23:12:42.662566] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:15.520 [2024-12-09 23:12:42.683547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.520 [2024-12-09 
23:12:42.683631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:15.520 [2024-12-09 23:12:42.683652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.013 ms 00:31:15.520 [2024-12-09 23:12:42.683665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.520 [2024-12-09 23:12:42.683790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.520 [2024-12-09 23:12:42.683805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:15.520 [2024-12-09 23:12:42.683819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:15.520 [2024-12-09 23:12:42.683832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.520 [2024-12-09 23:12:42.694846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.521 [2024-12-09 23:12:42.694909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:15.521 [2024-12-09 23:12:42.694927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.912 ms 00:31:15.521 [2024-12-09 23:12:42.694946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.521 [2024-12-09 23:12:42.695048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.521 [2024-12-09 23:12:42.695067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:15.521 [2024-12-09 23:12:42.695080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:15.521 [2024-12-09 23:12:42.695092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.521 [2024-12-09 23:12:42.695186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.521 [2024-12-09 23:12:42.695201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:15.521 [2024-12-09 23:12:42.695213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:15.521 [2024-12-09 23:12:42.695226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.521 [2024-12-09 23:12:42.695266] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:15.521 [2024-12-09 23:12:42.700197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.521 [2024-12-09 23:12:42.700247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:15.521 [2024-12-09 23:12:42.700269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.948 ms 00:31:15.521 [2024-12-09 23:12:42.700281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.521 [2024-12-09 23:12:42.700327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.521 [2024-12-09 23:12:42.700341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:15.521 [2024-12-09 23:12:42.700355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:15.521 [2024-12-09 23:12:42.700368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.521 [2024-12-09 23:12:42.700419] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:15.521 [2024-12-09 23:12:42.700463] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:15.521 [2024-12-09 23:12:42.700503] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:15.521 [2024-12-09 23:12:42.700527] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:15.521 [2024-12-09 23:12:42.700620] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:15.521 [2024-12-09 23:12:42.700636] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:15.521 [2024-12-09 23:12:42.700651] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:15.521 [2024-12-09 23:12:42.700666] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:15.521 [2024-12-09 23:12:42.700681] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:15.521 [2024-12-09 23:12:42.700694] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:15.521 [2024-12-09 23:12:42.700706] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:15.521 [2024-12-09 23:12:42.700722] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:15.521 [2024-12-09 23:12:42.700734] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:15.521 [2024-12-09 23:12:42.700748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.521 [2024-12-09 23:12:42.700760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:15.521 [2024-12-09 23:12:42.700773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:31:15.521 [2024-12-09 23:12:42.700784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.521 [2024-12-09 23:12:42.700863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.521 [2024-12-09 23:12:42.700877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:15.521 [2024-12-09 23:12:42.700889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:15.521 [2024-12-09 23:12:42.700900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.521 [2024-12-09 23:12:42.701008] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:15.521 [2024-12-09 23:12:42.701035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:15.521 [2024-12-09 23:12:42.701049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.521 [2024-12-09 23:12:42.701061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:15.521 [2024-12-09 23:12:42.701084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:15.521 [2024-12-09 23:12:42.701107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:15.521 [2024-12-09 23:12:42.701119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.521 [2024-12-09 23:12:42.701141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md_mirror 00:31:15.521 [2024-12-09 23:12:42.701153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:15.521 [2024-12-09 23:12:42.701164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.521 [2024-12-09 23:12:42.701188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:15.521 [2024-12-09 23:12:42.701200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:15.521 [2024-12-09 23:12:42.701211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:15.521 [2024-12-09 23:12:42.701233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:15.521 [2024-12-09 23:12:42.701244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:15.521 [2024-12-09 23:12:42.701266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.521 [2024-12-09 23:12:42.701287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:15.521 [2024-12-09 23:12:42.701298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.521 [2024-12-09 23:12:42.701320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:15.521 [2024-12-09 23:12:42.701331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.521 [2024-12-09 23:12:42.701352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:15.521 [2024-12-09 23:12:42.701363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.521 [2024-12-09 23:12:42.701385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:15.521 [2024-12-09 23:12:42.701396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.521 [2024-12-09 23:12:42.701418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:15.521 [2024-12-09 23:12:42.701429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:15.521 [2024-12-09 23:12:42.701439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.521 [2024-12-09 23:12:42.701465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:15.521 [2024-12-09 23:12:42.701477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:15.521 [2024-12-09 23:12:42.701488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:15.521 [2024-12-09 23:12:42.701511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:15.521 [2024-12-09 23:12:42.701522] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701534] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:15.521 [2024-12-09 23:12:42.701547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:15.521 [2024-12-09 23:12:42.701558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.521 [2024-12-09 23:12:42.701569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.521 [2024-12-09 23:12:42.701581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:15.521 [2024-12-09 23:12:42.701593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:15.521 [2024-12-09 23:12:42.701604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:15.521 [2024-12-09 23:12:42.701615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:15.521 [2024-12-09 23:12:42.701626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:15.521 [2024-12-09 23:12:42.701637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:15.521 [2024-12-09 23:12:42.701650] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:15.521 [2024-12-09 23:12:42.701664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.521 [2024-12-09 23:12:42.701682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:15.521 [2024-12-09 23:12:42.701695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:15.521 [2024-12-09 23:12:42.701708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:15.521 [2024-12-09 23:12:42.701720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:15.521 [2024-12-09 23:12:42.701732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:15.521 [2024-12-09 23:12:42.701744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:15.521 [2024-12-09 23:12:42.701756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:15.521 [2024-12-09 23:12:42.701768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:15.521 [2024-12-09 23:12:42.701780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:15.522 [2024-12-09 23:12:42.701791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:15.522 [2024-12-09 23:12:42.701803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:15.522 [2024-12-09 23:12:42.701815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 
ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:15.522 [2024-12-09 23:12:42.701827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:15.522 [2024-12-09 23:12:42.701839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:15.522 [2024-12-09 23:12:42.701851] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:15.522 [2024-12-09 23:12:42.701863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.522 [2024-12-09 23:12:42.701877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:15.522 [2024-12-09 23:12:42.701889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:15.522 [2024-12-09 23:12:42.701901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:15.522 [2024-12-09 23:12:42.701913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:15.522 [2024-12-09 23:12:42.701927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.522 [2024-12-09 23:12:42.701940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:15.522 [2024-12-09 23:12:42.701952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:31:15.522 [2024-12-09 23:12:42.701964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.522 [2024-12-09 23:12:42.749180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.522 [2024-12-09 23:12:42.749265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:15.522 [2024-12-09 23:12:42.749284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.230 ms 00:31:15.522 [2024-12-09 23:12:42.749304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.522 [2024-12-09 23:12:42.749416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.522 [2024-12-09 23:12:42.749430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:15.522 [2024-12-09 23:12:42.749443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:15.522 [2024-12-09 23:12:42.749477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.522 [2024-12-09 23:12:42.808535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.522 [2024-12-09 23:12:42.808620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:15.522 [2024-12-09 23:12:42.808638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.043 ms 00:31:15.522 [2024-12-09 23:12:42.808651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.522 [2024-12-09 23:12:42.808729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.522 [2024-12-09 23:12:42.808743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:15.522 [2024-12-09 23:12:42.808763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.005 ms 00:31:15.522 [2024-12-09 23:12:42.808775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.522 [2024-12-09 23:12:42.809687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.522 [2024-12-09 23:12:42.809719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:15.522 [2024-12-09 23:12:42.809733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:31:15.522 [2024-12-09 23:12:42.809746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.522 [2024-12-09 23:12:42.809884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.522 [2024-12-09 23:12:42.809902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:15.522 [2024-12-09 23:12:42.809923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:31:15.522 [2024-12-09 23:12:42.809935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.522 [2024-12-09 23:12:42.830411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.522 [2024-12-09 23:12:42.830525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:15.522 [2024-12-09 23:12:42.830546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.480 ms 00:31:15.522 [2024-12-09 23:12:42.830559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.522 [2024-12-09 23:12:42.851464] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:15.522 [2024-12-09 23:12:42.851555] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:15.522 [2024-12-09 23:12:42.851578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.522 [2024-12-09 23:12:42.851593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:15.522 [2024-12-09 23:12:42.851610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.864 ms 00:31:15.522 [2024-12-09 23:12:42.851622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.780 [2024-12-09 23:12:42.882982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.780 [2024-12-09 23:12:42.883091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:15.780 [2024-12-09 23:12:42.883114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.309 ms 00:31:15.780 [2024-12-09 23:12:42.883130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.780 [2024-12-09 23:12:42.903242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.780 [2024-12-09 23:12:42.903343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:15.780 [2024-12-09 23:12:42.903364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.998 ms 00:31:15.780 [2024-12-09 23:12:42.903378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.781 [2024-12-09 23:12:42.922726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.781 [2024-12-09 23:12:42.922817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:15.781 [2024-12-09 23:12:42.922839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.276 ms 00:31:15.781 [2024-12-09 
23:12:42.922851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.781 [2024-12-09 23:12:42.923733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.781 [2024-12-09 23:12:42.923777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:15.781 [2024-12-09 23:12:42.923798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:31:15.781 [2024-12-09 23:12:42.923810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.781 [2024-12-09 23:12:43.014928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.781 [2024-12-09 23:12:43.015046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:15.781 [2024-12-09 23:12:43.015084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.228 ms 00:31:15.781 [2024-12-09 23:12:43.015097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.781 [2024-12-09 23:12:43.029011] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:15.781 [2024-12-09 23:12:43.032446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.781 [2024-12-09 23:12:43.032539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:15.781 [2024-12-09 23:12:43.032574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.269 ms 00:31:15.781 [2024-12-09 23:12:43.032588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.781 [2024-12-09 23:12:43.032763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.781 [2024-12-09 23:12:43.032780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:15.781 [2024-12-09 23:12:43.032800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:15.781 [2024-12-09 23:12:43.032812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.781 [2024-12-09 23:12:43.032899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.781 [2024-12-09 23:12:43.032914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:15.781 [2024-12-09 23:12:43.032928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:15.781 [2024-12-09 23:12:43.032940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.781 [2024-12-09 23:12:43.032969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.781 [2024-12-09 23:12:43.032982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:15.781 [2024-12-09 23:12:43.032994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:15.781 [2024-12-09 23:12:43.033007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.781 [2024-12-09 23:12:43.033051] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:15.781 [2024-12-09 23:12:43.033066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.781 [2024-12-09 23:12:43.033078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:15.781 [2024-12-09 23:12:43.033091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:15.781 [2024-12-09 23:12:43.033103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.781 [2024-12-09 23:12:43.072318] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.781 [2024-12-09 23:12:43.072406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:15.781 [2024-12-09 23:12:43.072441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.246 ms 00:31:15.781 [2024-12-09 23:12:43.073222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.781 [2024-12-09 23:12:43.073336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.781 [2024-12-09 23:12:43.073353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:15.781 [2024-12-09 23:12:43.073368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:15.781 [2024-12-09 23:12:43.073381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.781 [2024-12-09 23:12:43.074715] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 415.771 ms, result 0 00:31:17.154  [2024-12-09T23:13:21.331Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-09 23:13:21.146932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.995 [2024-12-09 23:13:21.147006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:53.995 [2024-12-09 23:13:21.147026]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:53.995 [2024-12-09 23:13:21.147038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.995 [2024-12-09 23:13:21.147249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:53.995 [2024-12-09 23:13:21.151726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.995 [2024-12-09 23:13:21.151792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:53.995 [2024-12-09 23:13:21.151808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.461 ms 00:31:53.995 [2024-12-09 23:13:21.151820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.995 [2024-12-09 23:13:21.152088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.995 [2024-12-09 23:13:21.152111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:53.995 [2024-12-09 23:13:21.152123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:31:53.995 [2024-12-09 23:13:21.152134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.995 [2024-12-09 23:13:21.155288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.995 [2024-12-09 23:13:21.155322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:53.995 [2024-12-09 23:13:21.155334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:31:53.995 [2024-12-09 23:13:21.155351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.995 [2024-12-09 23:13:21.161277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.995 [2024-12-09 23:13:21.161334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:53.995 [2024-12-09 23:13:21.161349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.910 ms 00:31:53.995 [2024-12-09 23:13:21.161360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.995 [2024-12-09 23:13:21.206430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.995 [2024-12-09 23:13:21.206552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:53.995 [2024-12-09 23:13:21.206572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.025 ms 00:31:53.995 [2024-12-09 23:13:21.206584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.995 [2024-12-09 23:13:21.230907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.995 [2024-12-09 23:13:21.230995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:53.995 [2024-12-09 23:13:21.231014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.262 ms 00:31:53.995 [2024-12-09 23:13:21.231026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.996 [2024-12-09 23:13:21.231248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.996 [2024-12-09 23:13:21.231265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:53.996 [2024-12-09 23:13:21.231278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:31:53.996 [2024-12-09 23:13:21.231290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.996 [2024-12-09 23:13:21.275607] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.996 [2024-12-09 23:13:21.275676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:53.996 [2024-12-09 23:13:21.275694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.365 ms 00:31:53.996 [2024-12-09 23:13:21.275706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:53.996 [2024-12-09 23:13:21.319030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:53.996 [2024-12-09 23:13:21.319121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:53.996 [2024-12-09 23:13:21.319141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.312 ms 00:31:53.996 [2024-12-09 23:13:21.319154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.255 [2024-12-09 23:13:21.361937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.255 [2024-12-09 23:13:21.362261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:54.255 [2024-12-09 23:13:21.362292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.751 ms 00:31:54.255 [2024-12-09 23:13:21.362304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.255 [2024-12-09 23:13:21.404463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.255 [2024-12-09 23:13:21.404717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:54.255 [2024-12-09 23:13:21.404748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.028 ms 00:31:54.255 [2024-12-09 23:13:21.404760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.255 [2024-12-09 23:13:21.404831] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:54.255 [2024-12-09 23:13:21.404862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.404882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.404894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.404907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.404919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.404932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.404944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.404956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.404967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.404978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.404990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 
wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:54.255 [2024-12-09 23:13:21.405199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405604] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405898] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.405989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.406001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.406012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.406023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.406035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.406047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:54.256 [2024-12-09 23:13:21.406068] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:54.256 [2024-12-09 23:13:21.406079] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c2221666-d845-4992-88fe-319b86a6eed4 00:31:54.256 [2024-12-09 23:13:21.406092] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:54.256 [2024-12-09 23:13:21.406103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:54.256 [2024-12-09 23:13:21.406114] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:54.256 [2024-12-09 23:13:21.406126] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:54.256 [2024-12-09 23:13:21.406150] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:54.256 [2024-12-09 23:13:21.406162] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:54.256 [2024-12-09 23:13:21.406173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:54.256 [2024-12-09 23:13:21.406183] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:54.256 [2024-12-09 23:13:21.406193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:54.256 [2024-12-09 23:13:21.406205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.256 [2024-12-09 23:13:21.406217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:54.256 [2024-12-09 23:13:21.406229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.377 ms 
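
Every FTL management step in this log is emitted as the same trace_step quartet (Action or Rollback, then name, duration, status), so a saved console log can be reduced to a per-step timing summary with standard tools. A minimal sketch, assuming the output above has been captured with one record per line as on the original console (the console.log path is a placeholder, not part of the test):

#!/usr/bin/env bash
# Sum FTL trace_step durations per step name from a captured console log.
# Assumes one record per line, e.g.:
#   ... mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
#   ... mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.013 ms
LOG=${1:-console.log}   # placeholder path for the saved output

awk '
  /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: /     { sub(/.*name: /, ""); step = $0 }
  /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, ""); total[step] += $0 }
  END { for (s in total) printf "%10.3f ms  %s\n", total[s], s }
' "$LOG" | sort -rn

Steps that occur in both an Action pass and a later Rollback pass are summed together, which is usually what you want when checking where a several-hundred-millisecond FTL startup or shutdown was spent.
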
00:31:54.256 [2024-12-09 23:13:21.406244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.256 [2024-12-09 23:13:21.428181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.256 [2024-12-09 23:13:21.428482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:54.256 [2024-12-09 23:13:21.428527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.903 ms 00:31:54.256 [2024-12-09 23:13:21.428539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.256 [2024-12-09 23:13:21.429203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.256 [2024-12-09 23:13:21.429221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:54.256 [2024-12-09 23:13:21.429253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:31:54.256 [2024-12-09 23:13:21.429272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.256 [2024-12-09 23:13:21.486639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.257 [2024-12-09 23:13:21.486723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:54.257 [2024-12-09 23:13:21.486741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.257 [2024-12-09 23:13:21.486753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.257 [2024-12-09 23:13:21.486846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.257 [2024-12-09 23:13:21.486858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:54.257 [2024-12-09 23:13:21.486876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.257 [2024-12-09 23:13:21.486887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.257 [2024-12-09 23:13:21.487016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.257 [2024-12-09 23:13:21.487031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:54.257 [2024-12-09 23:13:21.487043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.257 [2024-12-09 23:13:21.487054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.257 [2024-12-09 23:13:21.487074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.257 [2024-12-09 23:13:21.487087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:54.257 [2024-12-09 23:13:21.487098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.257 [2024-12-09 23:13:21.487114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.514 [2024-12-09 23:13:21.625964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.514 [2024-12-09 23:13:21.626083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:54.514 [2024-12-09 23:13:21.626102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.514 [2024-12-09 23:13:21.626114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.514 [2024-12-09 23:13:21.745110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.514 [2024-12-09 23:13:21.745194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:54.514 [2024-12-09 23:13:21.745223] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.514 [2024-12-09 23:13:21.745235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.514 [2024-12-09 23:13:21.745343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.514 [2024-12-09 23:13:21.745357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:54.514 [2024-12-09 23:13:21.745369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.514 [2024-12-09 23:13:21.745380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.514 [2024-12-09 23:13:21.745427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.514 [2024-12-09 23:13:21.745440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:54.514 [2024-12-09 23:13:21.745482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.515 [2024-12-09 23:13:21.745494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.515 [2024-12-09 23:13:21.745633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.515 [2024-12-09 23:13:21.745648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:54.515 [2024-12-09 23:13:21.745659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.515 [2024-12-09 23:13:21.745672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.515 [2024-12-09 23:13:21.745712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.515 [2024-12-09 23:13:21.745725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:54.515 [2024-12-09 23:13:21.745737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.515 [2024-12-09 23:13:21.745748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.515 [2024-12-09 23:13:21.745797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.515 [2024-12-09 23:13:21.745809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:54.515 [2024-12-09 23:13:21.745821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.515 [2024-12-09 23:13:21.745831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.515 [2024-12-09 23:13:21.745877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.515 [2024-12-09 23:13:21.745891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:54.515 [2024-12-09 23:13:21.745902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.515 [2024-12-09 23:13:21.745913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.515 [2024-12-09 23:13:21.746049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 600.059 ms, result 0 00:31:55.896 00:31:55.896 00:31:55.896 23:13:22 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:57.797 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:57.797 23:13:24 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:57.797 
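
Together with the read-back at restore.sh@74 earlier, the numbered steps logged here reduce to: dump the first 262144 blocks of ftl0 into testfile, verify the dump against the md5 checksum recorded earlier in the test, then write the data back into ftl0 at block offset 131072 (262144 blocks at the FTL bdev's 4 KiB block size is exactly the 1024 MB the copy progress reports). Condensed into a standalone sketch, using the commands and paths shown in the log; running this outside the harness assumes the ftl.json bdev stack can be brought up on the target machine first:

#!/usr/bin/env bash
set -euo pipefail
SPDK=/home/vagrant/spdk_repo/spdk            # repo root used by this job
DD=$SPDK/build/bin/spdk_dd
CFG=$SPDK/test/ftl/config/ftl.json           # JSON config that brings up the ftl0 bdev
FILE=$SPDK/test/ftl/testfile

# restore.sh@74: read 262144 blocks out of the restored FTL bdev into a file
$DD --ib=ftl0 --of="$FILE" --json="$CFG" --count=262144

# restore.sh@76: verify the dump against the stored checksum
md5sum -c "$FILE.md5"

# restore.sh@79: write the verified data back into ftl0 at block offset 131072
$DD --if="$FILE" --ob=ftl0 --json="$CFG" --seek=131072

Each spdk_dd run is a full SPDK application start, which is why the step below repeats the DPDK EAL initialization and the entire FTL startup sequence before any data moves.
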
[2024-12-09 23:13:24.941079] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:31:57.797 [2024-12-09 23:13:24.941328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80559 ] 00:31:58.059 [2024-12-09 23:13:25.144734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.059 [2024-12-09 23:13:25.278193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.317 [2024-12-09 23:13:25.643216] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:58.317 [2024-12-09 23:13:25.643313] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:58.577 [2024-12-09 23:13:25.805892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.577 [2024-12-09 23:13:25.806238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:58.577 [2024-12-09 23:13:25.806274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:58.577 [2024-12-09 23:13:25.806289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.577 [2024-12-09 23:13:25.806389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.577 [2024-12-09 23:13:25.806408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:58.577 [2024-12-09 23:13:25.806423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:31:58.577 [2024-12-09 23:13:25.806439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.577 [2024-12-09 23:13:25.806512] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:58.577 [2024-12-09 23:13:25.807566] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:58.577 [2024-12-09 23:13:25.807591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.577 [2024-12-09 23:13:25.807602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:58.577 [2024-12-09 23:13:25.807615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:31:58.577 [2024-12-09 23:13:25.807625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.577 [2024-12-09 23:13:25.809873] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:58.577 [2024-12-09 23:13:25.830741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.577 [2024-12-09 23:13:25.830826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:58.577 [2024-12-09 23:13:25.830844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.898 ms 00:31:58.577 [2024-12-09 23:13:25.830855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.577 [2024-12-09 23:13:25.830992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.577 [2024-12-09 23:13:25.831007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:58.577 [2024-12-09 23:13:25.831018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:58.577 [2024-12-09 23:13:25.831030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
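
As with the previous invocation, spdk_dd replays the --json configuration before opening ftl0 (the "Currently unable to find bdev with name: nvc0n1" notices above are logged while that stack is still coming up), and the same FTL startup sequence then runs once more. The config file itself is never echoed into the log; a minimal hypothetical equivalent in SPDK's standard JSON-config layout, reusing the bdev names that do appear here (ftl0, with nvc0n1p0 as the write-buffer cache) and a made-up base bdev name, might look like:

# Hypothetical minimal ftl.json; the real test config must also create the
# base bdev and the nvc0n1p0 cache partition before bdev_ftl_create can open them.
cat > ftl.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_ftl_create",
          "params": {
            "name": "ftl0",
            "base_bdev": "nvme0n1",
            "cache": "nvc0n1p0"
          }
        }
      ]
    }
  ]
}
EOF

With a config along these lines on disk, the spdk_dd commands shown above can be pointed at it via --json.
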
00:31:58.577 [2024-12-09 23:13:25.842589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.577 [2024-12-09 23:13:25.842654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:58.577 [2024-12-09 23:13:25.842670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.460 ms 00:31:58.578 [2024-12-09 23:13:25.842688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.578 [2024-12-09 23:13:25.842800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.578 [2024-12-09 23:13:25.842817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:58.578 [2024-12-09 23:13:25.842829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:31:58.578 [2024-12-09 23:13:25.842839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.578 [2024-12-09 23:13:25.842927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.578 [2024-12-09 23:13:25.842941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:58.578 [2024-12-09 23:13:25.842952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:58.578 [2024-12-09 23:13:25.842963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.578 [2024-12-09 23:13:25.842997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:58.578 [2024-12-09 23:13:25.848764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.578 [2024-12-09 23:13:25.849009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:58.578 [2024-12-09 23:13:25.849045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.783 ms 00:31:58.578 [2024-12-09 23:13:25.849057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.578 [2024-12-09 23:13:25.849115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.578 [2024-12-09 23:13:25.849128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:58.578 [2024-12-09 23:13:25.849139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:58.578 [2024-12-09 23:13:25.849149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.578 [2024-12-09 23:13:25.849206] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:58.578 [2024-12-09 23:13:25.849233] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:58.578 [2024-12-09 23:13:25.849270] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:58.578 [2024-12-09 23:13:25.849291] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:58.578 [2024-12-09 23:13:25.849383] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:58.578 [2024-12-09 23:13:25.849397] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:58.578 [2024-12-09 23:13:25.849411] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:58.578 [2024-12-09 23:13:25.849424] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:58.578 [2024-12-09 23:13:25.849437] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:58.578 [2024-12-09 23:13:25.849467] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:58.578 [2024-12-09 23:13:25.849478] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:58.578 [2024-12-09 23:13:25.849491] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:58.578 [2024-12-09 23:13:25.849502] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:58.578 [2024-12-09 23:13:25.849513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.578 [2024-12-09 23:13:25.849523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:58.578 [2024-12-09 23:13:25.849535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:31:58.578 [2024-12-09 23:13:25.849545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.578 [2024-12-09 23:13:25.849622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.578 [2024-12-09 23:13:25.849635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:58.578 [2024-12-09 23:13:25.849646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:58.578 [2024-12-09 23:13:25.849657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.578 [2024-12-09 23:13:25.849756] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:58.578 [2024-12-09 23:13:25.849771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:58.578 [2024-12-09 23:13:25.849783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:58.578 [2024-12-09 23:13:25.849793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.578 [2024-12-09 23:13:25.849804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:58.578 [2024-12-09 23:13:25.849814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:58.578 [2024-12-09 23:13:25.849824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:58.578 [2024-12-09 23:13:25.849834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:58.578 [2024-12-09 23:13:25.849844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:58.578 [2024-12-09 23:13:25.849853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:58.578 [2024-12-09 23:13:25.849864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:58.578 [2024-12-09 23:13:25.849873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:58.578 [2024-12-09 23:13:25.849882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:58.578 [2024-12-09 23:13:25.849903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:58.578 [2024-12-09 23:13:25.849913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:58.578 [2024-12-09 23:13:25.849923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.578 [2024-12-09 23:13:25.849932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:58.578 [2024-12-09 23:13:25.849941] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:58.578 [2024-12-09 23:13:25.849951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.578 [2024-12-09 23:13:25.849961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:58.578 [2024-12-09 23:13:25.849971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:58.578 [2024-12-09 23:13:25.849980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.578 [2024-12-09 23:13:25.849989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:58.578 [2024-12-09 23:13:25.849998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:58.578 [2024-12-09 23:13:25.850007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.578 [2024-12-09 23:13:25.850017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:58.578 [2024-12-09 23:13:25.850026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:58.578 [2024-12-09 23:13:25.850035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.578 [2024-12-09 23:13:25.850044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:58.578 [2024-12-09 23:13:25.850053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:58.578 [2024-12-09 23:13:25.850062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:58.578 [2024-12-09 23:13:25.850071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:58.578 [2024-12-09 23:13:25.850081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:58.578 [2024-12-09 23:13:25.850090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:58.578 [2024-12-09 23:13:25.850099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:58.578 [2024-12-09 23:13:25.850108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:58.578 [2024-12-09 23:13:25.850117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:58.578 [2024-12-09 23:13:25.850126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:58.578 [2024-12-09 23:13:25.850135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:58.578 [2024-12-09 23:13:25.850144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.578 [2024-12-09 23:13:25.850153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:58.578 [2024-12-09 23:13:25.850162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:58.578 [2024-12-09 23:13:25.850174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.578 [2024-12-09 23:13:25.850185] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:58.578 [2024-12-09 23:13:25.850196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:58.578 [2024-12-09 23:13:25.850205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:58.578 [2024-12-09 23:13:25.850216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:58.578 [2024-12-09 23:13:25.850226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:58.578 [2024-12-09 23:13:25.850236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:58.578 [2024-12-09 
23:13:25.850245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:58.578 [2024-12-09 23:13:25.850255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:58.578 [2024-12-09 23:13:25.850264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:58.578 [2024-12-09 23:13:25.850274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:58.578 [2024-12-09 23:13:25.850285] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:58.578 [2024-12-09 23:13:25.850297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:58.578 [2024-12-09 23:13:25.850312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:58.578 [2024-12-09 23:13:25.850323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:58.578 [2024-12-09 23:13:25.850334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:58.578 [2024-12-09 23:13:25.850345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:58.578 [2024-12-09 23:13:25.850355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:58.578 [2024-12-09 23:13:25.850365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:58.578 [2024-12-09 23:13:25.850375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:58.578 [2024-12-09 23:13:25.850386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:58.578 [2024-12-09 23:13:25.850396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:58.579 [2024-12-09 23:13:25.850407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:58.579 [2024-12-09 23:13:25.850417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:58.579 [2024-12-09 23:13:25.850427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:58.579 [2024-12-09 23:13:25.850438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:58.579 [2024-12-09 23:13:25.850469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:58.579 [2024-12-09 23:13:25.850480] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:58.579 [2024-12-09 23:13:25.850492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:58.579 [2024-12-09 23:13:25.850504] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:58.579 [2024-12-09 23:13:25.850516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:58.579 [2024-12-09 23:13:25.850526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:58.579 [2024-12-09 23:13:25.850538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:58.579 [2024-12-09 23:13:25.850549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.579 [2024-12-09 23:13:25.850560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:58.579 [2024-12-09 23:13:25.850571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:31:58.579 [2024-12-09 23:13:25.850582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.579 [2024-12-09 23:13:25.896459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.579 [2024-12-09 23:13:25.896529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:58.579 [2024-12-09 23:13:25.896547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.885 ms 00:31:58.579 [2024-12-09 23:13:25.896563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.579 [2024-12-09 23:13:25.896672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.579 [2024-12-09 23:13:25.896683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:58.579 [2024-12-09 23:13:25.896694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:58.579 [2024-12-09 23:13:25.896705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:25.951990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.839 [2024-12-09 23:13:25.952066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:58.839 [2024-12-09 23:13:25.952083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.283 ms 00:31:58.839 [2024-12-09 23:13:25.952094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:25.952170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.839 [2024-12-09 23:13:25.952182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:58.839 [2024-12-09 23:13:25.952199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:58.839 [2024-12-09 23:13:25.952210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:25.952775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.839 [2024-12-09 23:13:25.952795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:58.839 [2024-12-09 23:13:25.952806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:31:58.839 [2024-12-09 23:13:25.952817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:25.952955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.839 [2024-12-09 23:13:25.952971] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:58.839 [2024-12-09 23:13:25.952990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:31:58.839 [2024-12-09 23:13:25.953000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:25.972538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.839 [2024-12-09 23:13:25.972615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:58.839 [2024-12-09 23:13:25.972632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.545 ms 00:31:58.839 [2024-12-09 23:13:25.972643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:25.992755] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:58.839 [2024-12-09 23:13:25.992823] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:58.839 [2024-12-09 23:13:25.992844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.839 [2024-12-09 23:13:25.992856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:58.839 [2024-12-09 23:13:25.992870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.072 ms 00:31:58.839 [2024-12-09 23:13:25.992881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:26.024528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.839 [2024-12-09 23:13:26.024625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:58.839 [2024-12-09 23:13:26.024644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.603 ms 00:31:58.839 [2024-12-09 23:13:26.024656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:26.044975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.839 [2024-12-09 23:13:26.045054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:58.839 [2024-12-09 23:13:26.045071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.210 ms 00:31:58.839 [2024-12-09 23:13:26.045082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:26.065154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.839 [2024-12-09 23:13:26.065236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:58.839 [2024-12-09 23:13:26.065253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.021 ms 00:31:58.839 [2024-12-09 23:13:26.065264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:26.066155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.839 [2024-12-09 23:13:26.066197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:58.839 [2024-12-09 23:13:26.066216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:31:58.839 [2024-12-09 23:13:26.066226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:26.159026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:58.839 [2024-12-09 23:13:26.159122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:31:58.839 [2024-12-09 23:13:26.159153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.918 ms 00:31:58.839 [2024-12-09 23:13:26.159164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:58.839 [2024-12-09 23:13:26.173058] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:59.099 [2024-12-09 23:13:26.176771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.099 [2024-12-09 23:13:26.176828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:59.099 [2024-12-09 23:13:26.176844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.543 ms 00:31:59.099 [2024-12-09 23:13:26.176855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.099 [2024-12-09 23:13:26.176984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.099 [2024-12-09 23:13:26.177000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:59.099 [2024-12-09 23:13:26.177017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:59.099 [2024-12-09 23:13:26.177027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.099 [2024-12-09 23:13:26.177143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.099 [2024-12-09 23:13:26.177158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:59.099 [2024-12-09 23:13:26.177169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:59.099 [2024-12-09 23:13:26.177180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.099 [2024-12-09 23:13:26.177205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.099 [2024-12-09 23:13:26.177217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:59.099 [2024-12-09 23:13:26.177228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:59.099 [2024-12-09 23:13:26.177238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.099 [2024-12-09 23:13:26.177277] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:59.099 [2024-12-09 23:13:26.177289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.099 [2024-12-09 23:13:26.177300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:59.099 [2024-12-09 23:13:26.177311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:59.099 [2024-12-09 23:13:26.177321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.099 [2024-12-09 23:13:26.219014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.099 [2024-12-09 23:13:26.219301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:59.099 [2024-12-09 23:13:26.219342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.735 ms 00:31:59.099 [2024-12-09 23:13:26.219355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.099 [2024-12-09 23:13:26.219477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.099 [2024-12-09 23:13:26.219491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:59.099 [2024-12-09 23:13:26.219503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 
00:31:59.099 [2024-12-09 23:13:26.219513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.099 [2024-12-09 23:13:26.220840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 415.115 ms, result 0 00:32:00.035  [2024-12-09T23:13:28.314Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-09T23:13:29.249Z] Copying: 50/1024 [MB] (26 MBps) [2024-12-09T23:13:30.625Z] Copying: 76/1024 [MB] (25 MBps) [2024-12-09T23:13:31.565Z] Copying: 102/1024 [MB] (25 MBps) [2024-12-09T23:13:32.503Z] Copying: 126/1024 [MB] (24 MBps) [2024-12-09T23:13:33.437Z] Copying: 151/1024 [MB] (24 MBps) [2024-12-09T23:13:34.372Z] Copying: 176/1024 [MB] (24 MBps) [2024-12-09T23:13:35.305Z] Copying: 201/1024 [MB] (24 MBps) [2024-12-09T23:13:36.248Z] Copying: 225/1024 [MB] (24 MBps) [2024-12-09T23:13:37.629Z] Copying: 250/1024 [MB] (24 MBps) [2024-12-09T23:13:38.563Z] Copying: 276/1024 [MB] (25 MBps) [2024-12-09T23:13:39.500Z] Copying: 300/1024 [MB] (24 MBps) [2024-12-09T23:13:40.488Z] Copying: 325/1024 [MB] (25 MBps) [2024-12-09T23:13:41.423Z] Copying: 350/1024 [MB] (24 MBps) [2024-12-09T23:13:42.357Z] Copying: 376/1024 [MB] (25 MBps) [2024-12-09T23:13:43.290Z] Copying: 402/1024 [MB] (25 MBps) [2024-12-09T23:13:44.226Z] Copying: 428/1024 [MB] (26 MBps) [2024-12-09T23:13:45.603Z] Copying: 454/1024 [MB] (25 MBps) [2024-12-09T23:13:46.539Z] Copying: 479/1024 [MB] (25 MBps) [2024-12-09T23:13:47.475Z] Copying: 504/1024 [MB] (25 MBps) [2024-12-09T23:13:48.409Z] Copying: 531/1024 [MB] (26 MBps) [2024-12-09T23:13:49.348Z] Copying: 558/1024 [MB] (27 MBps) [2024-12-09T23:13:50.283Z] Copying: 586/1024 [MB] (27 MBps) [2024-12-09T23:13:51.219Z] Copying: 614/1024 [MB] (27 MBps) [2024-12-09T23:13:52.602Z] Copying: 639/1024 [MB] (25 MBps) [2024-12-09T23:13:53.539Z] Copying: 665/1024 [MB] (25 MBps) [2024-12-09T23:13:54.475Z] Copying: 690/1024 [MB] (25 MBps) [2024-12-09T23:13:55.410Z] Copying: 716/1024 [MB] (25 MBps) [2024-12-09T23:13:56.346Z] Copying: 743/1024 [MB] (26 MBps) [2024-12-09T23:13:57.308Z] Copying: 769/1024 [MB] (26 MBps) [2024-12-09T23:13:58.247Z] Copying: 794/1024 [MB] (24 MBps) [2024-12-09T23:13:59.188Z] Copying: 818/1024 [MB] (24 MBps) [2024-12-09T23:14:00.568Z] Copying: 843/1024 [MB] (24 MBps) [2024-12-09T23:14:01.520Z] Copying: 870/1024 [MB] (26 MBps) [2024-12-09T23:14:02.454Z] Copying: 894/1024 [MB] (24 MBps) [2024-12-09T23:14:03.389Z] Copying: 920/1024 [MB] (26 MBps) [2024-12-09T23:14:04.325Z] Copying: 948/1024 [MB] (27 MBps) [2024-12-09T23:14:05.297Z] Copying: 976/1024 [MB] (27 MBps) [2024-12-09T23:14:06.235Z] Copying: 1003/1024 [MB] (27 MBps) [2024-12-09T23:14:06.805Z] Copying: 1023/1024 [MB] (19 MBps) [2024-12-09T23:14:06.805Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-09 23:14:06.703535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.469 [2024-12-09 23:14:06.703621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:39.469 [2024-12-09 23:14:06.703655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:39.469 [2024-12-09 23:14:06.703667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.469 [2024-12-09 23:14:06.706960] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:39.469 [2024-12-09 23:14:06.713101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.469 [2024-12-09 23:14:06.713156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Unregister IO device 00:32:39.469 [2024-12-09 23:14:06.713173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.936 ms 00:32:39.469 [2024-12-09 23:14:06.713188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.469 [2024-12-09 23:14:06.723660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.469 [2024-12-09 23:14:06.723730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:39.469 [2024-12-09 23:14:06.723749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.641 ms 00:32:39.469 [2024-12-09 23:14:06.723774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.469 [2024-12-09 23:14:06.748279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.469 [2024-12-09 23:14:06.748358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:39.469 [2024-12-09 23:14:06.748376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.517 ms 00:32:39.469 [2024-12-09 23:14:06.748388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.469 [2024-12-09 23:14:06.753502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.469 [2024-12-09 23:14:06.753576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:39.469 [2024-12-09 23:14:06.753593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.078 ms 00:32:39.469 [2024-12-09 23:14:06.753614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.469 [2024-12-09 23:14:06.792466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.469 [2024-12-09 23:14:06.792766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:39.469 [2024-12-09 23:14:06.792795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.856 ms 00:32:39.469 [2024-12-09 23:14:06.792806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.727 [2024-12-09 23:14:06.815207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.727 [2024-12-09 23:14:06.815537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:39.727 [2024-12-09 23:14:06.815584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.348 ms 00:32:39.727 [2024-12-09 23:14:06.815597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.727 [2024-12-09 23:14:06.925417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.727 [2024-12-09 23:14:06.925505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:39.727 [2024-12-09 23:14:06.925525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.884 ms 00:32:39.727 [2024-12-09 23:14:06.925536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.727 [2024-12-09 23:14:06.966090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.727 [2024-12-09 23:14:06.966184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:39.727 [2024-12-09 23:14:06.966203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.595 ms 00:32:39.727 [2024-12-09 23:14:06.966213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.727 [2024-12-09 23:14:07.005559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.727 [2024-12-09 
23:14:07.005637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:39.727 [2024-12-09 23:14:07.005655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.336 ms 00:32:39.727 [2024-12-09 23:14:07.005665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.727 [2024-12-09 23:14:07.044678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.727 [2024-12-09 23:14:07.044757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:39.727 [2024-12-09 23:14:07.044775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.989 ms 00:32:39.727 [2024-12-09 23:14:07.044786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.986 [2024-12-09 23:14:07.084144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.986 [2024-12-09 23:14:07.084226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:39.986 [2024-12-09 23:14:07.084244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.282 ms 00:32:39.986 [2024-12-09 23:14:07.084255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.986 [2024-12-09 23:14:07.084338] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:39.986 [2024-12-09 23:14:07.084361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 111104 / 261120 wr_cnt: 1 state: open 00:32:39.986 [2024-12-09 23:14:07.084376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084554] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 
23:14:07.084840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:39.986 [2024-12-09 23:14:07.084905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.084915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.084926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.084937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.084948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.084959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.084969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.084980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.084991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:32:39.987 [2024-12-09 23:14:07.085110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:39.987 [2024-12-09 23:14:07.085507] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:39.987 [2024-12-09 23:14:07.085517] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c2221666-d845-4992-88fe-319b86a6eed4 00:32:39.987 [2024-12-09 23:14:07.085529] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 111104 00:32:39.987 [2024-12-09 23:14:07.085539] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 112064 00:32:39.987 [2024-12-09 23:14:07.085549] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 111104 00:32:39.987 [2024-12-09 23:14:07.085560] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086 00:32:39.987 [2024-12-09 23:14:07.085595] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:39.987 [2024-12-09 23:14:07.085605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:39.987 [2024-12-09 23:14:07.085616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:39.987 [2024-12-09 23:14:07.085625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:39.987 [2024-12-09 23:14:07.085635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:39.987 [2024-12-09 23:14:07.085646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.987 [2024-12-09 23:14:07.085656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:39.987 [2024-12-09 23:14:07.085667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.312 ms 00:32:39.987 [2024-12-09 23:14:07.085678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.987 [2024-12-09 23:14:07.106619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.987 [2024-12-09 23:14:07.106919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:39.987 [2024-12-09 23:14:07.106962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.917 ms 00:32:39.987 [2024-12-09 
23:14:07.106974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.987 [2024-12-09 23:14:07.107625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:39.987 [2024-12-09 23:14:07.107639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:39.987 [2024-12-09 23:14:07.107651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:32:39.987 [2024-12-09 23:14:07.107662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.987 [2024-12-09 23:14:07.160133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.987 [2024-12-09 23:14:07.160211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:39.987 [2024-12-09 23:14:07.160228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.987 [2024-12-09 23:14:07.160239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.987 [2024-12-09 23:14:07.160316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.987 [2024-12-09 23:14:07.160328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:39.987 [2024-12-09 23:14:07.160340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.987 [2024-12-09 23:14:07.160350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.987 [2024-12-09 23:14:07.160487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.987 [2024-12-09 23:14:07.160510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:39.987 [2024-12-09 23:14:07.160521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.987 [2024-12-09 23:14:07.160531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.987 [2024-12-09 23:14:07.160551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.987 [2024-12-09 23:14:07.160562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:39.987 [2024-12-09 23:14:07.160572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.987 [2024-12-09 23:14:07.160583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:39.987 [2024-12-09 23:14:07.287881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:39.987 [2024-12-09 23:14:07.288184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:39.987 [2024-12-09 23:14:07.288211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:39.987 [2024-12-09 23:14:07.288223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.246 [2024-12-09 23:14:07.399552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.246 [2024-12-09 23:14:07.399640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:40.246 [2024-12-09 23:14:07.399656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.246 [2024-12-09 23:14:07.399668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-12-09 23:14:07.399783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.247 [2024-12-09 23:14:07.399797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:40.247 [2024-12-09 23:14:07.399808] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.247 [2024-12-09 23:14:07.399823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-12-09 23:14:07.399876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.247 [2024-12-09 23:14:07.399889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:40.247 [2024-12-09 23:14:07.399900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.247 [2024-12-09 23:14:07.399911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-12-09 23:14:07.400036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.247 [2024-12-09 23:14:07.400051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:40.247 [2024-12-09 23:14:07.400062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.247 [2024-12-09 23:14:07.400078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-12-09 23:14:07.400120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.247 [2024-12-09 23:14:07.400133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:40.247 [2024-12-09 23:14:07.400144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.247 [2024-12-09 23:14:07.400155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-12-09 23:14:07.400195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.247 [2024-12-09 23:14:07.400206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:40.247 [2024-12-09 23:14:07.400217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.247 [2024-12-09 23:14:07.400227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-12-09 23:14:07.400276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:40.247 [2024-12-09 23:14:07.400289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:40.247 [2024-12-09 23:14:07.400300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:40.247 [2024-12-09 23:14:07.400310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:40.247 [2024-12-09 23:14:07.400437] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 699.177 ms, result 0 00:32:42.155 00:32:42.155 00:32:42.155 23:14:08 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:32:42.155 [2024-12-09 23:14:09.097507] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
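For orientation, the restore test around these records follows a write / read-back / checksum pattern. A condensed sketch of that flow; the flags, paths, and offsets are copied verbatim from the restore.sh invocations recorded in this log, while the grouping and comments are editorial:

    # Condensed sketch of the ftl_restore flow recorded above.
    SPDK=/home/vagrant/spdk_repo/spdk

    # Write the test pattern into the FTL bdev at an offset (restore.sh@79).
    "$SPDK"/build/bin/spdk_dd --if="$SPDK"/test/ftl/testfile --ob=ftl0 \
        --json="$SPDK"/test/ftl/config/ftl.json --seek=131072

    # After FTL shuts down and restarts, read the same range back (restore.sh@80).
    "$SPDK"/build/bin/spdk_dd --ib=ftl0 --of="$SPDK"/test/ftl/testfile \
        --json="$SPDK"/test/ftl/config/ftl.json --skip=131072 --count=262144

    # Verify the read-back data against the stored checksum, as at
    # restore.sh@76 earlier in the log.
    md5sum -c "$SPDK"/test/ftl/testfile.md5

Note also that the WAF reported in the shutdown statistics above is simply total writes divided by user writes: 112064 / 111104 ≈ 1.0086, i.e. under 1% write amplification for this workload.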
00:32:42.155 [2024-12-09 23:14:09.097861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81000 ] 00:32:42.155 [2024-12-09 23:14:09.284053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.155 [2024-12-09 23:14:09.420166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.729 [2024-12-09 23:14:09.822050] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:42.729 [2024-12-09 23:14:09.822143] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:42.729 [2024-12-09 23:14:09.986622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.729 [2024-12-09 23:14:09.986953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:42.729 [2024-12-09 23:14:09.986981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:42.729 [2024-12-09 23:14:09.986993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:09.987084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.729 [2024-12-09 23:14:09.987100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:42.729 [2024-12-09 23:14:09.987112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:42.729 [2024-12-09 23:14:09.987122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:09.987146] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:42.729 [2024-12-09 23:14:09.988117] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:42.729 [2024-12-09 23:14:09.988146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.729 [2024-12-09 23:14:09.988157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:42.729 [2024-12-09 23:14:09.988169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:32:42.729 [2024-12-09 23:14:09.988179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:09.990335] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:42.729 [2024-12-09 23:14:10.010163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.729 [2024-12-09 23:14:10.010236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:42.729 [2024-12-09 23:14:10.010257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.858 ms 00:32:42.729 [2024-12-09 23:14:10.010271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:10.010401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.729 [2024-12-09 23:14:10.010418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:42.729 [2024-12-09 23:14:10.010432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:42.729 [2024-12-09 23:14:10.010445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:10.021286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:42.729 [2024-12-09 23:14:10.021628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:42.729 [2024-12-09 23:14:10.021657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.713 ms 00:32:42.729 [2024-12-09 23:14:10.021677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:10.021780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.729 [2024-12-09 23:14:10.021794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:42.729 [2024-12-09 23:14:10.021806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:32:42.729 [2024-12-09 23:14:10.021816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:10.021898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.729 [2024-12-09 23:14:10.021910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:42.729 [2024-12-09 23:14:10.021921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:42.729 [2024-12-09 23:14:10.021932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:10.021963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:42.729 [2024-12-09 23:14:10.026913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.729 [2024-12-09 23:14:10.026953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:42.729 [2024-12-09 23:14:10.026969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.965 ms 00:32:42.729 [2024-12-09 23:14:10.026980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:10.027021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.729 [2024-12-09 23:14:10.027034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:42.729 [2024-12-09 23:14:10.027048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:42.729 [2024-12-09 23:14:10.027059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:10.027101] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:42.729 [2024-12-09 23:14:10.027127] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:42.729 [2024-12-09 23:14:10.027162] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:42.729 [2024-12-09 23:14:10.027184] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:42.729 [2024-12-09 23:14:10.027274] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:42.729 [2024-12-09 23:14:10.027287] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:42.729 [2024-12-09 23:14:10.027301] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:42.729 [2024-12-09 23:14:10.027314] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:42.729 [2024-12-09 23:14:10.027326] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:42.729 [2024-12-09 23:14:10.027338] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:42.729 [2024-12-09 23:14:10.027350] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:42.729 [2024-12-09 23:14:10.027364] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:42.729 [2024-12-09 23:14:10.027375] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:42.729 [2024-12-09 23:14:10.027385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.729 [2024-12-09 23:14:10.027396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:42.729 [2024-12-09 23:14:10.027406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:32:42.729 [2024-12-09 23:14:10.027416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:10.027510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.729 [2024-12-09 23:14:10.027523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:42.729 [2024-12-09 23:14:10.027533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:42.729 [2024-12-09 23:14:10.027543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.729 [2024-12-09 23:14:10.027645] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:42.729 [2024-12-09 23:14:10.027660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:42.729 [2024-12-09 23:14:10.027671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:42.729 [2024-12-09 23:14:10.027682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.730 [2024-12-09 23:14:10.027692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:42.730 [2024-12-09 23:14:10.027702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:42.730 [2024-12-09 23:14:10.027711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:42.730 [2024-12-09 23:14:10.027722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:42.730 [2024-12-09 23:14:10.027732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:42.730 [2024-12-09 23:14:10.027742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:42.730 [2024-12-09 23:14:10.027753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:42.730 [2024-12-09 23:14:10.027763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:42.730 [2024-12-09 23:14:10.027772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:42.730 [2024-12-09 23:14:10.027792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:42.730 [2024-12-09 23:14:10.027803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:42.730 [2024-12-09 23:14:10.027812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.730 [2024-12-09 23:14:10.027822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:42.730 [2024-12-09 23:14:10.027832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:42.730 [2024-12-09 23:14:10.027841] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.730 [2024-12-09 23:14:10.027851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:42.730 [2024-12-09 23:14:10.027861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:42.730 [2024-12-09 23:14:10.027871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.730 [2024-12-09 23:14:10.027880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:42.730 [2024-12-09 23:14:10.027890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:42.730 [2024-12-09 23:14:10.027899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.730 [2024-12-09 23:14:10.027909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:42.730 [2024-12-09 23:14:10.027919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:42.730 [2024-12-09 23:14:10.027928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.730 [2024-12-09 23:14:10.027937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:42.730 [2024-12-09 23:14:10.027946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:42.730 [2024-12-09 23:14:10.027955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:42.730 [2024-12-09 23:14:10.027964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:42.730 [2024-12-09 23:14:10.027973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:42.730 [2024-12-09 23:14:10.027982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:42.730 [2024-12-09 23:14:10.027991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:42.730 [2024-12-09 23:14:10.028000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:42.730 [2024-12-09 23:14:10.028009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:42.730 [2024-12-09 23:14:10.028018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:42.730 [2024-12-09 23:14:10.028027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:42.730 [2024-12-09 23:14:10.028036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.730 [2024-12-09 23:14:10.028045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:42.730 [2024-12-09 23:14:10.028055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:42.730 [2024-12-09 23:14:10.028066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.730 [2024-12-09 23:14:10.028075] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:42.730 [2024-12-09 23:14:10.028085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:42.730 [2024-12-09 23:14:10.028095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:42.730 [2024-12-09 23:14:10.028105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:42.730 [2024-12-09 23:14:10.028114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:42.730 [2024-12-09 23:14:10.028124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:42.730 [2024-12-09 23:14:10.028133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:42.730 
[2024-12-09 23:14:10.028142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:42.730 [2024-12-09 23:14:10.028151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:42.730 [2024-12-09 23:14:10.028161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:42.730 [2024-12-09 23:14:10.028172] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:42.730 [2024-12-09 23:14:10.028184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:42.730 [2024-12-09 23:14:10.028199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:42.730 [2024-12-09 23:14:10.028209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:42.730 [2024-12-09 23:14:10.028219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:42.730 [2024-12-09 23:14:10.028230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:42.730 [2024-12-09 23:14:10.028240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:42.730 [2024-12-09 23:14:10.028251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:42.730 [2024-12-09 23:14:10.028261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:42.730 [2024-12-09 23:14:10.028271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:42.730 [2024-12-09 23:14:10.028281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:42.730 [2024-12-09 23:14:10.028291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:42.730 [2024-12-09 23:14:10.028301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:42.730 [2024-12-09 23:14:10.028311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:42.730 [2024-12-09 23:14:10.028322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:42.730 [2024-12-09 23:14:10.028332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:42.730 [2024-12-09 23:14:10.028343] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:42.730 [2024-12-09 23:14:10.028354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:42.730 [2024-12-09 23:14:10.028366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:42.730 [2024-12-09 23:14:10.028377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:42.730 [2024-12-09 23:14:10.028387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:42.730 [2024-12-09 23:14:10.028399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:42.730 [2024-12-09 23:14:10.028410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.730 [2024-12-09 23:14:10.028420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:42.730 [2024-12-09 23:14:10.028430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:32:42.730 [2024-12-09 23:14:10.028441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.989 [2024-12-09 23:14:10.074467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.989 [2024-12-09 23:14:10.074542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:42.989 [2024-12-09 23:14:10.074561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.017 ms 00:32:42.989 [2024-12-09 23:14:10.074577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.989 [2024-12-09 23:14:10.074685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.989 [2024-12-09 23:14:10.074698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:42.989 [2024-12-09 23:14:10.074709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:42.989 [2024-12-09 23:14:10.074719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.989 [2024-12-09 23:14:10.139415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.989 [2024-12-09 23:14:10.139505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:42.989 [2024-12-09 23:14:10.139521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.704 ms 00:32:42.989 [2024-12-09 23:14:10.139533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.989 [2024-12-09 23:14:10.139602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.989 [2024-12-09 23:14:10.139615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:42.989 [2024-12-09 23:14:10.139632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:42.989 [2024-12-09 23:14:10.139642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.989 [2024-12-09 23:14:10.140512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.989 [2024-12-09 23:14:10.140537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:42.989 [2024-12-09 23:14:10.140549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:32:42.989 [2024-12-09 23:14:10.140560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.989 [2024-12-09 23:14:10.140685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.989 [2024-12-09 23:14:10.140701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:42.989 [2024-12-09 23:14:10.140720] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:32:42.989 [2024-12-09 23:14:10.140731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.989 [2024-12-09 23:14:10.160873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.989 [2024-12-09 23:14:10.161150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:42.989 [2024-12-09 23:14:10.161182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.149 ms 00:32:42.989 [2024-12-09 23:14:10.161193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.989 [2024-12-09 23:14:10.181761] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:32:42.989 [2024-12-09 23:14:10.181835] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:42.989 [2024-12-09 23:14:10.181855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.989 [2024-12-09 23:14:10.181867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:42.989 [2024-12-09 23:14:10.181882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.526 ms 00:32:42.989 [2024-12-09 23:14:10.181893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.989 [2024-12-09 23:14:10.213526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.989 [2024-12-09 23:14:10.213891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:42.989 [2024-12-09 23:14:10.213922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.601 ms 00:32:42.989 [2024-12-09 23:14:10.213934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.990 [2024-12-09 23:14:10.234373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.990 [2024-12-09 23:14:10.234492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:42.990 [2024-12-09 23:14:10.234511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.389 ms 00:32:42.990 [2024-12-09 23:14:10.234522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.990 [2024-12-09 23:14:10.254597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.990 [2024-12-09 23:14:10.254676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:42.990 [2024-12-09 23:14:10.254692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.024 ms 00:32:42.990 [2024-12-09 23:14:10.254703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:42.990 [2024-12-09 23:14:10.255603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:42.990 [2024-12-09 23:14:10.255632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:42.990 [2024-12-09 23:14:10.255650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:32:42.990 [2024-12-09 23:14:10.255661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.249 [2024-12-09 23:14:10.349760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.249 [2024-12-09 23:14:10.349833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:43.250 [2024-12-09 23:14:10.349860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.215 ms 00:32:43.250 [2024-12-09 23:14:10.349872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.250 [2024-12-09 23:14:10.365014] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:43.250 [2024-12-09 23:14:10.368558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.250 [2024-12-09 23:14:10.368611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:43.250 [2024-12-09 23:14:10.368628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.633 ms 00:32:43.250 [2024-12-09 23:14:10.368640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.250 [2024-12-09 23:14:10.368790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.250 [2024-12-09 23:14:10.368804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:43.250 [2024-12-09 23:14:10.368821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:43.250 [2024-12-09 23:14:10.368831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.250 [2024-12-09 23:14:10.370509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.250 [2024-12-09 23:14:10.370553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:43.250 [2024-12-09 23:14:10.370566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:32:43.250 [2024-12-09 23:14:10.370576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.250 [2024-12-09 23:14:10.370615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.250 [2024-12-09 23:14:10.370627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:43.250 [2024-12-09 23:14:10.370638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:43.250 [2024-12-09 23:14:10.370648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.250 [2024-12-09 23:14:10.370706] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:43.250 [2024-12-09 23:14:10.370719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.250 [2024-12-09 23:14:10.370730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:43.250 [2024-12-09 23:14:10.370740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:32:43.250 [2024-12-09 23:14:10.370750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.250 [2024-12-09 23:14:10.410884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.250 [2024-12-09 23:14:10.411186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:43.250 [2024-12-09 23:14:10.411332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.173 ms 00:32:43.250 [2024-12-09 23:14:10.411373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.250 [2024-12-09 23:14:10.411515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.250 [2024-12-09 23:14:10.411637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:43.250 [2024-12-09 23:14:10.411696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:32:43.250 [2024-12-09 23:14:10.411727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
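One cross-check on the layout dump above: the 80.00 MiB l2p region follows directly from the reported L2P geometry (20971520 entries, 4 bytes per address). A quick illustrative calculation, assuming those figures are exact:

    $ echo $((20971520 * 4 / 1024 / 1024))   # L2P table size in MiB
    80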
00:32:43.250 [2024-12-09 23:14:10.413220] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 426.814 ms, result 0 00:32:44.627  [2024-12-09T23:14:12.898Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-09T23:14:13.834Z] Copying: 49/1024 [MB] (25 MBps) [2024-12-09T23:14:14.788Z] Copying: 75/1024 [MB] (25 MBps) [2024-12-09T23:14:15.733Z] Copying: 100/1024 [MB] (25 MBps) [2024-12-09T23:14:16.671Z] Copying: 126/1024 [MB] (26 MBps) [2024-12-09T23:14:18.048Z] Copying: 154/1024 [MB] (27 MBps) [2024-12-09T23:14:19.007Z] Copying: 180/1024 [MB] (26 MBps) [2024-12-09T23:14:19.944Z] Copying: 206/1024 [MB] (26 MBps) [2024-12-09T23:14:20.879Z] Copying: 234/1024 [MB] (27 MBps) [2024-12-09T23:14:21.810Z] Copying: 261/1024 [MB] (27 MBps) [2024-12-09T23:14:22.744Z] Copying: 288/1024 [MB] (27 MBps) [2024-12-09T23:14:23.685Z] Copying: 316/1024 [MB] (27 MBps) [2024-12-09T23:14:25.060Z] Copying: 344/1024 [MB] (27 MBps) [2024-12-09T23:14:25.991Z] Copying: 371/1024 [MB] (27 MBps) [2024-12-09T23:14:26.929Z] Copying: 398/1024 [MB] (27 MBps) [2024-12-09T23:14:27.887Z] Copying: 426/1024 [MB] (27 MBps) [2024-12-09T23:14:28.827Z] Copying: 453/1024 [MB] (27 MBps) [2024-12-09T23:14:29.761Z] Copying: 479/1024 [MB] (26 MBps) [2024-12-09T23:14:30.706Z] Copying: 505/1024 [MB] (25 MBps) [2024-12-09T23:14:31.643Z] Copying: 530/1024 [MB] (25 MBps) [2024-12-09T23:14:33.018Z] Copying: 556/1024 [MB] (25 MBps) [2024-12-09T23:14:34.022Z] Copying: 582/1024 [MB] (25 MBps) [2024-12-09T23:14:34.959Z] Copying: 608/1024 [MB] (25 MBps) [2024-12-09T23:14:35.892Z] Copying: 634/1024 [MB] (25 MBps) [2024-12-09T23:14:36.829Z] Copying: 659/1024 [MB] (25 MBps) [2024-12-09T23:14:37.841Z] Copying: 684/1024 [MB] (25 MBps) [2024-12-09T23:14:38.773Z] Copying: 710/1024 [MB] (25 MBps) [2024-12-09T23:14:39.708Z] Copying: 736/1024 [MB] (25 MBps) [2024-12-09T23:14:40.659Z] Copying: 761/1024 [MB] (25 MBps) [2024-12-09T23:14:41.610Z] Copying: 787/1024 [MB] (26 MBps) [2024-12-09T23:14:42.983Z] Copying: 814/1024 [MB] (27 MBps) [2024-12-09T23:14:43.939Z] Copying: 841/1024 [MB] (26 MBps) [2024-12-09T23:14:44.892Z] Copying: 867/1024 [MB] (26 MBps) [2024-12-09T23:14:45.824Z] Copying: 894/1024 [MB] (26 MBps) [2024-12-09T23:14:46.762Z] Copying: 921/1024 [MB] (27 MBps) [2024-12-09T23:14:47.697Z] Copying: 948/1024 [MB] (26 MBps) [2024-12-09T23:14:48.636Z] Copying: 975/1024 [MB] (26 MBps) [2024-12-09T23:14:49.573Z] Copying: 1001/1024 [MB] (26 MBps) [2024-12-09T23:14:50.215Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-09 23:14:49.952987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.879 [2024-12-09 23:14:49.953073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:22.879 [2024-12-09 23:14:49.953123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:22.879 [2024-12-09 23:14:49.953138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.879 [2024-12-09 23:14:49.953181] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:22.879 [2024-12-09 23:14:49.957963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.879 [2024-12-09 23:14:49.958212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:22.879 [2024-12-09 23:14:49.958253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.754 ms 00:33:22.880 [2024-12-09 23:14:49.958271] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:33:22.880 [2024-12-09 23:14:49.958596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.880 [2024-12-09 23:14:49.958628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:22.880 [2024-12-09 23:14:49.958666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:33:22.880 [2024-12-09 23:14:49.959013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.880 [2024-12-09 23:14:49.963383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.880 [2024-12-09 23:14:49.963664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:22.880 [2024-12-09 23:14:49.963687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.312 ms 00:33:22.880 [2024-12-09 23:14:49.963705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.880 [2024-12-09 23:14:49.969411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.880 [2024-12-09 23:14:49.969485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:22.880 [2024-12-09 23:14:49.969509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.650 ms 00:33:22.880 [2024-12-09 23:14:49.969537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.880 [2024-12-09 23:14:50.015352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.880 [2024-12-09 23:14:50.015647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:22.880 [2024-12-09 23:14:50.015676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.802 ms 00:33:22.880 [2024-12-09 23:14:50.015693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.880 [2024-12-09 23:14:50.040170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.880 [2024-12-09 23:14:50.040248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:22.880 [2024-12-09 23:14:50.040275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.400 ms 00:33:22.880 [2024-12-09 23:14:50.040293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:22.880 [2024-12-09 23:14:50.183846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:22.880 [2024-12-09 23:14:50.183940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:22.880 [2024-12-09 23:14:50.183969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 143.668 ms 00:33:22.880 [2024-12-09 23:14:50.183991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.140 [2024-12-09 23:14:50.225962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.140 [2024-12-09 23:14:50.226219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:33:23.140 [2024-12-09 23:14:50.226258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.001 ms 00:33:23.140 [2024-12-09 23:14:50.226273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.140 [2024-12-09 23:14:50.265930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.140 [2024-12-09 23:14:50.266013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:33:23.140 [2024-12-09 23:14:50.266040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.627 ms 00:33:23.140 
[2024-12-09 23:14:50.266057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.140 [2024-12-09 23:14:50.305439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.140 [2024-12-09 23:14:50.305523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:23.140 [2024-12-09 23:14:50.305549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.349 ms 00:33:23.140 [2024-12-09 23:14:50.305567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.140 [2024-12-09 23:14:50.345277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.140 [2024-12-09 23:14:50.345360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:23.140 [2024-12-09 23:14:50.345389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.585 ms 00:33:23.140 [2024-12-09 23:14:50.345407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.140 [2024-12-09 23:14:50.345542] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:23.140 [2024-12-09 23:14:50.345606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:33:23.140 [2024-12-09 23:14:50.345630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 
[2024-12-09 23:14:50.345921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.345995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:23.140 [2024-12-09 23:14:50.346279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:33:23.141 [2024-12-09 23:14:50.346369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.346988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:23.141 [2024-12-09 23:14:50.347442] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:23.141 [2024-12-09 23:14:50.347472] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c2221666-d845-4992-88fe-319b86a6eed4 00:33:23.141 [2024-12-09 23:14:50.347492] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:33:23.141 [2024-12-09 23:14:50.347510] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 20928 00:33:23.141 [2024-12-09 23:14:50.347526] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 19968 00:33:23.141 [2024-12-09 23:14:50.347544] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0481 00:33:23.141 [2024-12-09 23:14:50.347572] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:23.141 [2024-12-09 23:14:50.347605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:23.141 [2024-12-09 23:14:50.347622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:23.141 [2024-12-09 23:14:50.347637] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:23.141 [2024-12-09 23:14:50.347651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:23.141 [2024-12-09 23:14:50.347671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.141 [2024-12-09 23:14:50.347689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:23.141 [2024-12-09 23:14:50.347707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.134 ms 00:33:23.141 [2024-12-09 23:14:50.347724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.141 [2024-12-09 23:14:50.369042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.141 [2024-12-09 23:14:50.369118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:23.141 [2024-12-09 23:14:50.369155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.255 ms 00:33:23.141 [2024-12-09 23:14:50.369187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.141 [2024-12-09 23:14:50.369816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:23.141 [2024-12-09 23:14:50.369860] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:23.141 [2024-12-09 23:14:50.369883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:33:23.141 [2024-12-09 23:14:50.369899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.141 [2024-12-09 23:14:50.422276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.141 [2024-12-09 23:14:50.422361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:23.141 [2024-12-09 23:14:50.422388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.141 [2024-12-09 23:14:50.422405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.141 [2024-12-09 23:14:50.422546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.141 [2024-12-09 23:14:50.422569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:23.141 [2024-12-09 23:14:50.422588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.141 [2024-12-09 23:14:50.422603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.141 [2024-12-09 23:14:50.422736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.141 [2024-12-09 23:14:50.422758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:23.141 [2024-12-09 23:14:50.422784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.141 [2024-12-09 23:14:50.422801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.142 [2024-12-09 23:14:50.422831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.142 [2024-12-09 23:14:50.422850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:23.142 [2024-12-09 23:14:50.422866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.142 [2024-12-09 23:14:50.422881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.400 [2024-12-09 23:14:50.550716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.400 [2024-12-09 23:14:50.550814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:23.400 [2024-12-09 23:14:50.550841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.400 [2024-12-09 23:14:50.550858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.400 [2024-12-09 23:14:50.660042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.400 [2024-12-09 23:14:50.660335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:23.400 [2024-12-09 23:14:50.660377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.400 [2024-12-09 23:14:50.660394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.400 [2024-12-09 23:14:50.660543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.400 [2024-12-09 23:14:50.660565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:23.400 [2024-12-09 23:14:50.660583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.400 [2024-12-09 23:14:50.660607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.400 [2024-12-09 23:14:50.660685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:33:23.400 [2024-12-09 23:14:50.660706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:23.400 [2024-12-09 23:14:50.660724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.400 [2024-12-09 23:14:50.660740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.400 [2024-12-09 23:14:50.660907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.400 [2024-12-09 23:14:50.660930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:23.400 [2024-12-09 23:14:50.660948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.400 [2024-12-09 23:14:50.660965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.400 [2024-12-09 23:14:50.661033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.400 [2024-12-09 23:14:50.661053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:23.400 [2024-12-09 23:14:50.661069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.400 [2024-12-09 23:14:50.661087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.400 [2024-12-09 23:14:50.661143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.400 [2024-12-09 23:14:50.661161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:23.400 [2024-12-09 23:14:50.661177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.400 [2024-12-09 23:14:50.661194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.400 [2024-12-09 23:14:50.661259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:23.400 [2024-12-09 23:14:50.661277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:23.400 [2024-12-09 23:14:50.661295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:23.400 [2024-12-09 23:14:50.661310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:23.400 [2024-12-09 23:14:50.661518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 709.591 ms, result 0 00:33:24.770 00:33:24.770 00:33:24.770 23:14:51 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:26.204 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:26.204 23:14:53 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:33:26.204 23:14:53 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:33:26.204 23:14:53 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:26.462 23:14:53 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:26.462 23:14:53 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:26.462 23:14:53 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79479 00:33:26.462 23:14:53 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79479 ']' 00:33:26.462 Process with pid 79479 is not found 00:33:26.462 23:14:53 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79479 00:33:26.462 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79479) - No such process 00:33:26.462 23:14:53 ftl.ftl_restore -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 79479 is not found' 00:33:26.462 23:14:53 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:33:26.462 Remove shared memory files 00:33:26.462 23:14:53 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:26.462 23:14:53 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:33:26.462 23:14:53 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:33:26.462 23:14:53 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:33:26.462 23:14:53 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:26.462 23:14:53 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:33:26.462 ************************************ 00:33:26.462 END TEST ftl_restore 00:33:26.462 ************************************ 00:33:26.462 00:33:26.462 real 3m15.046s 00:33:26.462 user 3m0.503s 00:33:26.462 sys 0m15.381s 00:33:26.462 23:14:53 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:26.462 23:14:53 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:33:26.721 23:14:53 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:33:26.721 23:14:53 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:26.721 23:14:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.721 23:14:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:26.721 ************************************ 00:33:26.721 START TEST ftl_dirty_shutdown 00:33:26.721 ************************************ 00:33:26.721 23:14:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:33:26.721 * Looking for test storage... 
00:33:26.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:33:26.721 23:14:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:26.721 23:14:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:33:26.721 23:14:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:33:26.721 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:33:26.981 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:33:26.981 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:26.981 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:33:26.981 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:26.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.982 --rc genhtml_branch_coverage=1 00:33:26.982 --rc genhtml_function_coverage=1 00:33:26.982 --rc genhtml_legend=1 00:33:26.982 --rc geninfo_all_blocks=1 00:33:26.982 --rc geninfo_unexecuted_blocks=1 00:33:26.982 00:33:26.982 ' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:26.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.982 --rc genhtml_branch_coverage=1 00:33:26.982 --rc genhtml_function_coverage=1 00:33:26.982 --rc genhtml_legend=1 00:33:26.982 --rc geninfo_all_blocks=1 00:33:26.982 --rc geninfo_unexecuted_blocks=1 00:33:26.982 00:33:26.982 ' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:26.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.982 --rc genhtml_branch_coverage=1 00:33:26.982 --rc genhtml_function_coverage=1 00:33:26.982 --rc genhtml_legend=1 00:33:26.982 --rc geninfo_all_blocks=1 00:33:26.982 --rc geninfo_unexecuted_blocks=1 00:33:26.982 00:33:26.982 ' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:26.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.982 --rc genhtml_branch_coverage=1 00:33:26.982 --rc genhtml_function_coverage=1 00:33:26.982 --rc genhtml_legend=1 00:33:26.982 --rc geninfo_all_blocks=1 00:33:26.982 --rc geninfo_unexecuted_blocks=1 00:33:26.982 00:33:26.982 ' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:33:26.982 23:14:54 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:33:26.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81519 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81519 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81519 ']' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.982 23:14:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:26.982 [2024-12-09 23:14:54.220519] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:33:26.982 [2024-12-09 23:14:54.220892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81519 ] 00:33:27.241 [2024-12-09 23:14:54.408129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.241 [2024-12-09 23:14:54.537878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.172 23:14:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:28.172 23:14:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:28.172 23:14:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:33:28.172 23:14:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:33:28.172 23:14:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:33:28.172 23:14:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:33:28.172 23:14:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:33:28.172 23:14:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:28.740 23:14:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:33:28.740 23:14:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:33:28.740 23:14:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:33:28.740 23:14:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:33:28.740 23:14:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:28.740 23:14:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:33:28.740 23:14:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:33:28.740 23:14:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:33:28.740 23:14:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:28.740 { 00:33:28.740 "name": "nvme0n1", 00:33:28.740 "aliases": [ 00:33:28.740 "b249ff26-5cf3-4eda-8642-93d8d1d877a1" 00:33:28.740 ], 00:33:28.740 "product_name": "NVMe disk", 00:33:28.740 "block_size": 4096, 00:33:28.740 "num_blocks": 1310720, 00:33:28.740 "uuid": "b249ff26-5cf3-4eda-8642-93d8d1d877a1", 00:33:28.740 "numa_id": -1, 00:33:28.740 "assigned_rate_limits": { 00:33:28.740 "rw_ios_per_sec": 0, 00:33:28.740 "rw_mbytes_per_sec": 0, 00:33:28.740 "r_mbytes_per_sec": 0, 00:33:28.740 "w_mbytes_per_sec": 0 00:33:28.740 }, 00:33:28.740 "claimed": true, 00:33:28.740 "claim_type": "read_many_write_one", 00:33:28.740 "zoned": false, 00:33:28.740 "supported_io_types": { 00:33:28.740 "read": true, 00:33:28.740 "write": true, 00:33:28.740 "unmap": true, 00:33:28.740 "flush": true, 00:33:28.740 "reset": true, 00:33:28.740 "nvme_admin": true, 00:33:28.740 "nvme_io": true, 00:33:28.740 "nvme_io_md": false, 00:33:28.740 "write_zeroes": true, 00:33:28.740 "zcopy": false, 00:33:28.740 "get_zone_info": false, 00:33:28.740 "zone_management": false, 00:33:28.740 "zone_append": false, 00:33:28.740 "compare": true, 00:33:28.740 "compare_and_write": false, 00:33:28.740 "abort": true, 00:33:28.740 "seek_hole": false, 00:33:28.740 "seek_data": false, 00:33:28.740 
"copy": true, 00:33:28.740 "nvme_iov_md": false 00:33:28.740 }, 00:33:28.740 "driver_specific": { 00:33:28.740 "nvme": [ 00:33:28.740 { 00:33:28.740 "pci_address": "0000:00:11.0", 00:33:28.740 "trid": { 00:33:28.740 "trtype": "PCIe", 00:33:28.740 "traddr": "0000:00:11.0" 00:33:28.740 }, 00:33:28.740 "ctrlr_data": { 00:33:28.740 "cntlid": 0, 00:33:28.740 "vendor_id": "0x1b36", 00:33:28.740 "model_number": "QEMU NVMe Ctrl", 00:33:28.740 "serial_number": "12341", 00:33:28.740 "firmware_revision": "8.0.0", 00:33:28.740 "subnqn": "nqn.2019-08.org.qemu:12341", 00:33:28.740 "oacs": { 00:33:28.740 "security": 0, 00:33:28.740 "format": 1, 00:33:28.740 "firmware": 0, 00:33:28.740 "ns_manage": 1 00:33:28.740 }, 00:33:28.740 "multi_ctrlr": false, 00:33:28.740 "ana_reporting": false 00:33:28.740 }, 00:33:28.740 "vs": { 00:33:28.740 "nvme_version": "1.4" 00:33:28.740 }, 00:33:28.740 "ns_data": { 00:33:28.740 "id": 1, 00:33:28.740 "can_share": false 00:33:28.740 } 00:33:28.740 } 00:33:28.740 ], 00:33:28.740 "mp_policy": "active_passive" 00:33:28.740 } 00:33:28.740 } 00:33:28.740 ]' 00:33:28.740 23:14:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:28.999 23:14:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:33:28.999 23:14:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:28.999 23:14:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:33:28.999 23:14:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:33:28.999 23:14:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:33:28.999 23:14:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:33:28.999 23:14:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:33:28.999 23:14:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:33:28.999 23:14:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:28.999 23:14:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:29.259 23:14:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=83556be6-7d3c-496e-b993-3b4965b65d60 00:33:29.259 23:14:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:33:29.259 23:14:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 83556be6-7d3c-496e-b993-3b4965b65d60 00:33:29.259 23:14:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:33:29.519 23:14:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=305afbe7-467f-4732-a943-bbdb4cc3250c 00:33:29.519 23:14:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 305afbe7-467f-4732-a943-bbdb4cc3250c 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:33:29.802 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:30.109 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:30.109 { 00:33:30.109 "name": "06cbc073-6d20-43e3-8f38-34a56cbacb4c", 00:33:30.109 "aliases": [ 00:33:30.109 "lvs/nvme0n1p0" 00:33:30.109 ], 00:33:30.109 "product_name": "Logical Volume", 00:33:30.109 "block_size": 4096, 00:33:30.109 "num_blocks": 26476544, 00:33:30.109 "uuid": "06cbc073-6d20-43e3-8f38-34a56cbacb4c", 00:33:30.109 "assigned_rate_limits": { 00:33:30.109 "rw_ios_per_sec": 0, 00:33:30.109 "rw_mbytes_per_sec": 0, 00:33:30.109 "r_mbytes_per_sec": 0, 00:33:30.109 "w_mbytes_per_sec": 0 00:33:30.109 }, 00:33:30.109 "claimed": false, 00:33:30.109 "zoned": false, 00:33:30.109 "supported_io_types": { 00:33:30.109 "read": true, 00:33:30.109 "write": true, 00:33:30.109 "unmap": true, 00:33:30.109 "flush": false, 00:33:30.109 "reset": true, 00:33:30.109 "nvme_admin": false, 00:33:30.109 "nvme_io": false, 00:33:30.109 "nvme_io_md": false, 00:33:30.109 "write_zeroes": true, 00:33:30.109 "zcopy": false, 00:33:30.109 "get_zone_info": false, 00:33:30.109 "zone_management": false, 00:33:30.109 "zone_append": false, 00:33:30.109 "compare": false, 00:33:30.109 "compare_and_write": false, 00:33:30.109 "abort": false, 00:33:30.109 "seek_hole": true, 00:33:30.109 "seek_data": true, 00:33:30.109 "copy": false, 00:33:30.109 "nvme_iov_md": false 00:33:30.109 }, 00:33:30.109 "driver_specific": { 00:33:30.109 "lvol": { 00:33:30.109 "lvol_store_uuid": "305afbe7-467f-4732-a943-bbdb4cc3250c", 00:33:30.109 "base_bdev": "nvme0n1", 00:33:30.109 "thin_provision": true, 00:33:30.109 "num_allocated_clusters": 0, 00:33:30.109 "snapshot": false, 00:33:30.109 "clone": false, 00:33:30.109 "esnap_clone": false 00:33:30.109 } 00:33:30.109 } 00:33:30.109 } 00:33:30.109 ]' 00:33:30.109 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:30.109 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:33:30.109 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:30.109 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:30.109 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:30.109 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:33:30.109 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:33:30.109 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:33:30.109 23:14:57 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:33:30.368 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:33:30.368 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:33:30.368 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:30.368 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:30.368 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:30.368 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:33:30.368 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:33:30.368 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:30.627 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:30.627 { 00:33:30.627 "name": "06cbc073-6d20-43e3-8f38-34a56cbacb4c", 00:33:30.627 "aliases": [ 00:33:30.627 "lvs/nvme0n1p0" 00:33:30.627 ], 00:33:30.627 "product_name": "Logical Volume", 00:33:30.627 "block_size": 4096, 00:33:30.627 "num_blocks": 26476544, 00:33:30.627 "uuid": "06cbc073-6d20-43e3-8f38-34a56cbacb4c", 00:33:30.627 "assigned_rate_limits": { 00:33:30.627 "rw_ios_per_sec": 0, 00:33:30.627 "rw_mbytes_per_sec": 0, 00:33:30.627 "r_mbytes_per_sec": 0, 00:33:30.627 "w_mbytes_per_sec": 0 00:33:30.627 }, 00:33:30.627 "claimed": false, 00:33:30.627 "zoned": false, 00:33:30.627 "supported_io_types": { 00:33:30.627 "read": true, 00:33:30.627 "write": true, 00:33:30.627 "unmap": true, 00:33:30.627 "flush": false, 00:33:30.627 "reset": true, 00:33:30.627 "nvme_admin": false, 00:33:30.627 "nvme_io": false, 00:33:30.627 "nvme_io_md": false, 00:33:30.627 "write_zeroes": true, 00:33:30.627 "zcopy": false, 00:33:30.627 "get_zone_info": false, 00:33:30.627 "zone_management": false, 00:33:30.627 "zone_append": false, 00:33:30.627 "compare": false, 00:33:30.627 "compare_and_write": false, 00:33:30.627 "abort": false, 00:33:30.627 "seek_hole": true, 00:33:30.627 "seek_data": true, 00:33:30.627 "copy": false, 00:33:30.627 "nvme_iov_md": false 00:33:30.627 }, 00:33:30.627 "driver_specific": { 00:33:30.627 "lvol": { 00:33:30.627 "lvol_store_uuid": "305afbe7-467f-4732-a943-bbdb4cc3250c", 00:33:30.627 "base_bdev": "nvme0n1", 00:33:30.627 "thin_provision": true, 00:33:30.627 "num_allocated_clusters": 0, 00:33:30.627 "snapshot": false, 00:33:30.627 "clone": false, 00:33:30.627 "esnap_clone": false 00:33:30.627 } 00:33:30.627 } 00:33:30.627 } 00:33:30.627 ]' 00:33:30.627 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:30.627 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:33:30.627 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:30.627 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:30.627 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:30.627 23:14:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:33:30.627 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:33:30.627 23:14:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:33:30.886 23:14:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:33:30.886 23:14:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:30.886 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:30.887 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:33:30.887 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:33:30.887 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:33:30.887 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 06cbc073-6d20-43e3-8f38-34a56cbacb4c 00:33:31.145 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:33:31.145 { 00:33:31.145 "name": "06cbc073-6d20-43e3-8f38-34a56cbacb4c", 00:33:31.145 "aliases": [ 00:33:31.145 "lvs/nvme0n1p0" 00:33:31.145 ], 00:33:31.145 "product_name": "Logical Volume", 00:33:31.145 "block_size": 4096, 00:33:31.145 "num_blocks": 26476544, 00:33:31.145 "uuid": "06cbc073-6d20-43e3-8f38-34a56cbacb4c", 00:33:31.145 "assigned_rate_limits": { 00:33:31.145 "rw_ios_per_sec": 0, 00:33:31.145 "rw_mbytes_per_sec": 0, 00:33:31.145 "r_mbytes_per_sec": 0, 00:33:31.145 "w_mbytes_per_sec": 0 00:33:31.146 }, 00:33:31.146 "claimed": false, 00:33:31.146 "zoned": false, 00:33:31.146 "supported_io_types": { 00:33:31.146 "read": true, 00:33:31.146 "write": true, 00:33:31.146 "unmap": true, 00:33:31.146 "flush": false, 00:33:31.146 "reset": true, 00:33:31.146 "nvme_admin": false, 00:33:31.146 "nvme_io": false, 00:33:31.146 "nvme_io_md": false, 00:33:31.146 "write_zeroes": true, 00:33:31.146 "zcopy": false, 00:33:31.146 "get_zone_info": false, 00:33:31.146 "zone_management": false, 00:33:31.146 "zone_append": false, 00:33:31.146 "compare": false, 00:33:31.146 "compare_and_write": false, 00:33:31.146 "abort": false, 00:33:31.146 "seek_hole": true, 00:33:31.146 "seek_data": true, 00:33:31.146 "copy": false, 00:33:31.146 "nvme_iov_md": false 00:33:31.146 }, 00:33:31.146 "driver_specific": { 00:33:31.146 "lvol": { 00:33:31.146 "lvol_store_uuid": "305afbe7-467f-4732-a943-bbdb4cc3250c", 00:33:31.146 "base_bdev": "nvme0n1", 00:33:31.146 "thin_provision": true, 00:33:31.146 "num_allocated_clusters": 0, 00:33:31.146 "snapshot": false, 00:33:31.146 "clone": false, 00:33:31.146 "esnap_clone": false 00:33:31.146 } 00:33:31.146 } 00:33:31.146 } 00:33:31.146 ]' 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 06cbc073-6d20-43e3-8f38-34a56cbacb4c 
--l2p_dram_limit 10' 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:33:31.146 23:14:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 06cbc073-6d20-43e3-8f38-34a56cbacb4c --l2p_dram_limit 10 -c nvc0n1p0 00:33:31.405 [2024-12-09 23:14:58.629807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.405 [2024-12-09 23:14:58.629880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:31.405 [2024-12-09 23:14:58.629913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:31.405 [2024-12-09 23:14:58.629930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.405 [2024-12-09 23:14:58.630040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.405 [2024-12-09 23:14:58.630062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:31.405 [2024-12-09 23:14:58.630084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:33:31.405 [2024-12-09 23:14:58.630100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.405 [2024-12-09 23:14:58.630151] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:31.406 [2024-12-09 23:14:58.631359] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:31.406 [2024-12-09 23:14:58.631422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.406 [2024-12-09 23:14:58.631443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:31.406 [2024-12-09 23:14:58.631477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:33:31.406 [2024-12-09 23:14:58.631496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.406 [2024-12-09 23:14:58.631688] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 916e0778-c8dd-45bc-ac27-6bd810c141dd 00:33:31.406 [2024-12-09 23:14:58.634318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.406 [2024-12-09 23:14:58.634364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:33:31.406 [2024-12-09 23:14:58.634387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:31.406 [2024-12-09 23:14:58.634406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.406 [2024-12-09 23:14:58.647732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.406 [2024-12-09 23:14:58.647806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:31.406 [2024-12-09 23:14:58.647832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.195 ms 00:33:31.406 [2024-12-09 23:14:58.647854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.406 [2024-12-09 23:14:58.648029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.406 [2024-12-09 23:14:58.648061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:31.406 [2024-12-09 23:14:58.648080] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:33:31.406 [2024-12-09 23:14:58.648107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.406 [2024-12-09 23:14:58.648242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.406 [2024-12-09 23:14:58.648267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:31.406 [2024-12-09 23:14:58.648290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:33:31.406 [2024-12-09 23:14:58.648311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.406 [2024-12-09 23:14:58.648354] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:31.406 [2024-12-09 23:14:58.654413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.406 [2024-12-09 23:14:58.654483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:31.406 [2024-12-09 23:14:58.654514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.076 ms 00:33:31.406 [2024-12-09 23:14:58.654530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.406 [2024-12-09 23:14:58.654601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.406 [2024-12-09 23:14:58.654619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:31.406 [2024-12-09 23:14:58.654640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:31.406 [2024-12-09 23:14:58.654656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.406 [2024-12-09 23:14:58.654726] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:33:31.406 [2024-12-09 23:14:58.654913] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:31.406 [2024-12-09 23:14:58.654946] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:31.406 [2024-12-09 23:14:58.654969] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:31.406 [2024-12-09 23:14:58.654994] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:31.406 [2024-12-09 23:14:58.655013] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:31.406 [2024-12-09 23:14:58.655036] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:31.406 [2024-12-09 23:14:58.655052] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:31.406 [2024-12-09 23:14:58.655079] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:31.406 [2024-12-09 23:14:58.655096] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:31.406 [2024-12-09 23:14:58.655116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.406 [2024-12-09 23:14:58.655148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:31.406 [2024-12-09 23:14:58.655170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:33:31.406 [2024-12-09 23:14:58.655186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.406 [2024-12-09 23:14:58.655293] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.406 [2024-12-09 23:14:58.655311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:31.406 [2024-12-09 23:14:58.655333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:31.406 [2024-12-09 23:14:58.655349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.406 [2024-12-09 23:14:58.655482] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:31.406 [2024-12-09 23:14:58.655507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:31.406 [2024-12-09 23:14:58.655530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:31.406 [2024-12-09 23:14:58.655546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.406 [2024-12-09 23:14:58.655567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:31.406 [2024-12-09 23:14:58.655583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:31.406 [2024-12-09 23:14:58.655602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:31.406 [2024-12-09 23:14:58.655618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:31.406 [2024-12-09 23:14:58.655637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:31.406 [2024-12-09 23:14:58.655652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:31.406 [2024-12-09 23:14:58.655671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:31.406 [2024-12-09 23:14:58.655688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:31.406 [2024-12-09 23:14:58.655710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:31.406 [2024-12-09 23:14:58.655726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:31.406 [2024-12-09 23:14:58.655745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:31.406 [2024-12-09 23:14:58.655759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.406 [2024-12-09 23:14:58.655782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:31.406 [2024-12-09 23:14:58.655798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:31.406 [2024-12-09 23:14:58.655817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.406 [2024-12-09 23:14:58.655832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:31.406 [2024-12-09 23:14:58.655850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:31.406 [2024-12-09 23:14:58.655865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:31.406 [2024-12-09 23:14:58.655885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:31.406 [2024-12-09 23:14:58.655901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:31.406 [2024-12-09 23:14:58.655921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:31.406 [2024-12-09 23:14:58.655935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:31.406 [2024-12-09 23:14:58.655954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:31.406 [2024-12-09 23:14:58.655969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:31.406 [2024-12-09 23:14:58.655986] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:31.406 [2024-12-09 23:14:58.656003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:31.406 [2024-12-09 23:14:58.656022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:31.406 [2024-12-09 23:14:58.656036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:31.406 [2024-12-09 23:14:58.656059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:31.406 [2024-12-09 23:14:58.656075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:31.406 [2024-12-09 23:14:58.656093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:31.406 [2024-12-09 23:14:58.656110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:31.406 [2024-12-09 23:14:58.656129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:31.406 [2024-12-09 23:14:58.656145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:31.406 [2024-12-09 23:14:58.656166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:31.406 [2024-12-09 23:14:58.656182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.406 [2024-12-09 23:14:58.656202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:31.406 [2024-12-09 23:14:58.656217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:31.406 [2024-12-09 23:14:58.656237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.406 [2024-12-09 23:14:58.656252] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:31.406 [2024-12-09 23:14:58.656273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:31.406 [2024-12-09 23:14:58.656290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:31.406 [2024-12-09 23:14:58.656311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:31.406 [2024-12-09 23:14:58.656327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:31.406 [2024-12-09 23:14:58.656351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:31.406 [2024-12-09 23:14:58.656367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:31.406 [2024-12-09 23:14:58.656386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:31.406 [2024-12-09 23:14:58.656402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:31.406 [2024-12-09 23:14:58.656422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:31.406 [2024-12-09 23:14:58.656440] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:31.406 [2024-12-09 23:14:58.656481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:31.406 [2024-12-09 23:14:58.656501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:31.406 [2024-12-09 23:14:58.656521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:31.407 [2024-12-09 23:14:58.656539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:31.407 [2024-12-09 23:14:58.656559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:31.407 [2024-12-09 23:14:58.656575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:31.407 [2024-12-09 23:14:58.656596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:31.407 [2024-12-09 23:14:58.656614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:31.407 [2024-12-09 23:14:58.656635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:31.407 [2024-12-09 23:14:58.656652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:31.407 [2024-12-09 23:14:58.656678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:31.407 [2024-12-09 23:14:58.656694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:31.407 [2024-12-09 23:14:58.656716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:31.407 [2024-12-09 23:14:58.656737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:31.407 [2024-12-09 23:14:58.656758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:31.407 [2024-12-09 23:14:58.656776] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:31.407 [2024-12-09 23:14:58.656799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:31.407 [2024-12-09 23:14:58.656819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:31.407 [2024-12-09 23:14:58.656841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:31.407 [2024-12-09 23:14:58.656857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:31.407 [2024-12-09 23:14:58.656879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:31.407 [2024-12-09 23:14:58.656897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:31.407 [2024-12-09 23:14:58.656918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:31.407 [2024-12-09 23:14:58.656936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:33:31.407 [2024-12-09 23:14:58.656956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:31.407 [2024-12-09 23:14:58.657027] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:33:31.407 [2024-12-09 23:14:58.657056] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:33:34.706 [2024-12-09 23:15:01.919788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.706 [2024-12-09 23:15:01.919878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:33:34.706 [2024-12-09 23:15:01.919912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3268.055 ms 00:33:34.706 [2024-12-09 23:15:01.919927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.706 [2024-12-09 23:15:01.965847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.706 [2024-12-09 23:15:01.965918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:34.706 [2024-12-09 23:15:01.965936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.628 ms 00:33:34.706 [2024-12-09 23:15:01.965950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.706 [2024-12-09 23:15:01.966125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.706 [2024-12-09 23:15:01.966142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:34.706 [2024-12-09 23:15:01.966154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:33:34.706 [2024-12-09 23:15:01.966175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.706 [2024-12-09 23:15:02.015553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.706 [2024-12-09 23:15:02.015625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:34.706 [2024-12-09 23:15:02.015642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.390 ms 00:33:34.706 [2024-12-09 23:15:02.015656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.706 [2024-12-09 23:15:02.015715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.706 [2024-12-09 23:15:02.015736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:34.706 [2024-12-09 23:15:02.015747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:34.706 [2024-12-09 23:15:02.015773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.706 [2024-12-09 23:15:02.016287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.706 [2024-12-09 23:15:02.016308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:34.706 [2024-12-09 23:15:02.016320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:33:34.706 [2024-12-09 23:15:02.016333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.706 [2024-12-09 23:15:02.016445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.706 [2024-12-09 23:15:02.016481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:34.706 [2024-12-09 23:15:02.016496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:33:34.706 [2024-12-09 23:15:02.016512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.706 [2024-12-09 23:15:02.037649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.706 [2024-12-09 23:15:02.037728] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:34.706 [2024-12-09 23:15:02.037746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.147 ms 00:33:34.706 [2024-12-09 23:15:02.037760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.964 [2024-12-09 23:15:02.064987] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:34.964 [2024-12-09 23:15:02.069349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.964 [2024-12-09 23:15:02.069614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:34.964 [2024-12-09 23:15:02.069653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.497 ms 00:33:34.964 [2024-12-09 23:15:02.069665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.964 [2024-12-09 23:15:02.161465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.964 [2024-12-09 23:15:02.161551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:33:34.964 [2024-12-09 23:15:02.161572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.867 ms 00:33:34.964 [2024-12-09 23:15:02.161584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.964 [2024-12-09 23:15:02.161822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.964 [2024-12-09 23:15:02.161843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:34.964 [2024-12-09 23:15:02.161861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:33:34.964 [2024-12-09 23:15:02.161872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.964 [2024-12-09 23:15:02.204833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.964 [2024-12-09 23:15:02.205150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:33:34.964 [2024-12-09 23:15:02.205187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.934 ms 00:33:34.964 [2024-12-09 23:15:02.205199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.964 [2024-12-09 23:15:02.246914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.964 [2024-12-09 23:15:02.246990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:33:34.964 [2024-12-09 23:15:02.247012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.684 ms 00:33:34.964 [2024-12-09 23:15:02.247024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:34.964 [2024-12-09 23:15:02.247848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:34.964 [2024-12-09 23:15:02.247875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:34.964 [2024-12-09 23:15:02.247891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:33:34.964 [2024-12-09 23:15:02.247907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.224 [2024-12-09 23:15:02.359144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.224 [2024-12-09 23:15:02.359224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:33:35.224 [2024-12-09 23:15:02.359251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.311 ms 00:33:35.224 [2024-12-09 23:15:02.359263] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.224 [2024-12-09 23:15:02.402741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.224 [2024-12-09 23:15:02.402824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:33:35.224 [2024-12-09 23:15:02.402846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.371 ms 00:33:35.224 [2024-12-09 23:15:02.402857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.224 [2024-12-09 23:15:02.445842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.224 [2024-12-09 23:15:02.446165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:33:35.224 [2024-12-09 23:15:02.446200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.958 ms 00:33:35.224 [2024-12-09 23:15:02.446211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.224 [2024-12-09 23:15:02.488617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.224 [2024-12-09 23:15:02.488695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:35.224 [2024-12-09 23:15:02.488717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.386 ms 00:33:35.224 [2024-12-09 23:15:02.488728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.224 [2024-12-09 23:15:02.488822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.224 [2024-12-09 23:15:02.488835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:35.224 [2024-12-09 23:15:02.488855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:35.224 [2024-12-09 23:15:02.488867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.224 [2024-12-09 23:15:02.489007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:35.224 [2024-12-09 23:15:02.489027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:35.224 [2024-12-09 23:15:02.489041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:35.224 [2024-12-09 23:15:02.489052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:35.224 [2024-12-09 23:15:02.490566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3866.526 ms, result 0 00:33:35.224 { 00:33:35.224 "name": "ftl0", 00:33:35.224 "uuid": "916e0778-c8dd-45bc-ac27-6bd810c141dd" 00:33:35.224 } 00:33:35.224 23:15:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:33:35.224 23:15:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:33:35.482 23:15:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:33:35.482 23:15:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:33:35.482 23:15:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:33:35.741 /dev/nbd0 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:33:35.741 1+0 records in 00:33:35.741 1+0 records out 00:33:35.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410704 s, 10.0 MB/s 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:33:35.741 23:15:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:33:36.000 [2024-12-09 23:15:03.127284] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:33:36.000 [2024-12-09 23:15:03.127434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81668 ] 00:33:36.000 [2024-12-09 23:15:03.307073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:36.258 [2024-12-09 23:15:03.446006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.641  [2024-12-09T23:15:05.910Z] Copying: 194/1024 [MB] (194 MBps) [2024-12-09T23:15:07.277Z] Copying: 389/1024 [MB] (195 MBps) [2024-12-09T23:15:07.867Z] Copying: 585/1024 [MB] (195 MBps) [2024-12-09T23:15:09.241Z] Copying: 778/1024 [MB] (193 MBps) [2024-12-09T23:15:09.241Z] Copying: 967/1024 [MB] (188 MBps) [2024-12-09T23:15:10.615Z] Copying: 1024/1024 [MB] (average 193 MBps) 00:33:43.279 00:33:43.280 23:15:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:33:45.178 23:15:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:33:45.178 [2024-12-09 23:15:12.312968] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:33:45.178 [2024-12-09 23:15:12.313121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81759 ]
00:33:45.178 [2024-12-09 23:15:12.497525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:45.437 [2024-12-09 23:15:12.634059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:46.810  [2024-12-09T23:15:15.112Z] Copying: 17/1024 [MB] (17 MBps)
[intermediate 'Copying: N/1024 [MB]' progress updates at 16-18 MBps, 23:15:16Z through 23:16:11Z, condensed]
[2024-12-09T23:16:12.562Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:34:45.226
00:34:45.226 23:16:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
23:16:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
00:34:45.487 23:16:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:34:45.747 [2024-12-09 23:16:12.831570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:45.747 [2024-12-09 23:16:12.831653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:34:45.747 [2024-12-09 23:16:12.831670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:34:45.747 [2024-12-09 23:16:12.831684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:45.747 [2024-12-09 23:16:12.831717] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:34:45.747 [2024-12-09 23:16:12.836416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:45.747 [2024-12-09 23:16:12.836464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:34:45.747 [2024-12-09 23:16:12.836481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms
00:34:45.747 [2024-12-09 23:16:12.836493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:45.747 [2024-12-09 23:16:12.838514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:45.747 [2024-12-09 23:16:12.838563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:34:45.747 [2024-12-09 23:16:12.838581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.972 ms
00:34:45.747 [2024-12-09 23:16:12.838592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:45.747 [2024-12-09 23:16:12.856909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:45.747 [2024-12-09 23:16:12.856989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:34:45.747 [2024-12-09 23:16:12.857010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.302 ms
00:34:45.747 [2024-12-09 23:16:12.857022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:45.747 [2024-12-09 23:16:12.862137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:45.747 [2024-12-09 23:16:12.862192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:34:45.747 [2024-12-09 23:16:12.862209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.064 ms
00:34:45.747 [2024-12-09 23:16:12.862220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:45.747 [2024-12-09 23:16:12.905153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:45.747 [2024-12-09 23:16:12.905259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
NV cache metadata 00:34:45.747 [2024-12-09 23:16:12.905282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.879 ms 00:34:45.747 [2024-12-09 23:16:12.905293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.747 [2024-12-09 23:16:12.930396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.747 [2024-12-09 23:16:12.930518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:45.747 [2024-12-09 23:16:12.930545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.040 ms 00:34:45.747 [2024-12-09 23:16:12.930556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.747 [2024-12-09 23:16:12.930783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.747 [2024-12-09 23:16:12.930799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:45.747 [2024-12-09 23:16:12.930814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:34:45.747 [2024-12-09 23:16:12.930824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.747 [2024-12-09 23:16:12.971872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.747 [2024-12-09 23:16:12.972228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:45.747 [2024-12-09 23:16:12.972265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.085 ms 00:34:45.747 [2024-12-09 23:16:12.972277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.747 [2024-12-09 23:16:13.014055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.747 [2024-12-09 23:16:13.014153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:45.747 [2024-12-09 23:16:13.014175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.741 ms 00:34:45.747 [2024-12-09 23:16:13.014188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:45.747 [2024-12-09 23:16:13.055202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:45.747 [2024-12-09 23:16:13.055275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:45.747 [2024-12-09 23:16:13.055295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.980 ms 00:34:45.747 [2024-12-09 23:16:13.055307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.008 [2024-12-09 23:16:13.096834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.008 [2024-12-09 23:16:13.096909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:46.008 [2024-12-09 23:16:13.096929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.377 ms 00:34:46.008 [2024-12-09 23:16:13.096940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.008 [2024-12-09 23:16:13.097023] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:46.008 [2024-12-09 23:16:13.097043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:46.008 [2024-12-09 23:16:13.097060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:46.008 [2024-12-09 23:16:13.097072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:46.008 [2024-12-09 23:16:13.097086] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
[Bands 5 through 100 report identically — 0 / 261120 wr_cnt: 0 state: free; 96 repeated per-band notices condensed]
00:34:46.009 [2024-12-09 23:16:13.098388] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:34:46.009 [2024-12-09 23:16:13.098405] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 916e0778-c8dd-45bc-ac27-6bd810c141dd
00:34:46.009 [2024-12-09 23:16:13.098417] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:46.009
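
The stats block that follows reads "WAF: inf" because write amplification is the ratio of media writes to user writes, and this pass issued no user writes through ftl0 before shutdown: 960 total (internal/metadata) writes divided by 0 user writes has no finite value, so the debug dump prints inf rather than a number.
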
[2024-12-09 23:16:13.098432] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:46.009 [2024-12-09 23:16:13.098447] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:46.009 [2024-12-09 23:16:13.098477] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:46.009 [2024-12-09 23:16:13.098487] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:46.009 [2024-12-09 23:16:13.098501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:46.009 [2024-12-09 23:16:13.098511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:46.009 [2024-12-09 23:16:13.098524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:46.009 [2024-12-09 23:16:13.098533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:46.009 [2024-12-09 23:16:13.098560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.009 [2024-12-09 23:16:13.098571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:46.009 [2024-12-09 23:16:13.098584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.529 ms 00:34:46.009 [2024-12-09 23:16:13.098595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.009 [2024-12-09 23:16:13.119148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.009 [2024-12-09 23:16:13.119223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:46.009 [2024-12-09 23:16:13.119243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.483 ms 00:34:46.009 [2024-12-09 23:16:13.119254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.009 [2024-12-09 23:16:13.119904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:46.009 [2024-12-09 23:16:13.119918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:46.009 [2024-12-09 23:16:13.119932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:34:46.009 [2024-12-09 23:16:13.119943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.009 [2024-12-09 23:16:13.187074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.009 [2024-12-09 23:16:13.187143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:46.009 [2024-12-09 23:16:13.187163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:46.009 [2024-12-09 23:16:13.187175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.009 [2024-12-09 23:16:13.187277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.009 [2024-12-09 23:16:13.187290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:46.009 [2024-12-09 23:16:13.187304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:46.009 [2024-12-09 23:16:13.187314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.009 [2024-12-09 23:16:13.187436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.009 [2024-12-09 23:16:13.187479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:46.009 [2024-12-09 23:16:13.187494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:46.009 [2024-12-09 23:16:13.187504] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.009 [2024-12-09 23:16:13.187534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.009 [2024-12-09 23:16:13.187546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:46.009 [2024-12-09 23:16:13.187560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:46.009 [2024-12-09 23:16:13.187570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.009 [2024-12-09 23:16:13.318969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.009 [2024-12-09 23:16:13.319031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:46.009 [2024-12-09 23:16:13.319051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:46.009 [2024-12-09 23:16:13.319062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.268 [2024-12-09 23:16:13.428573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.268 [2024-12-09 23:16:13.428658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:46.268 [2024-12-09 23:16:13.428677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:46.268 [2024-12-09 23:16:13.428688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.268 [2024-12-09 23:16:13.428826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.268 [2024-12-09 23:16:13.428839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:46.268 [2024-12-09 23:16:13.428858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:46.268 [2024-12-09 23:16:13.428869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.268 [2024-12-09 23:16:13.428935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.268 [2024-12-09 23:16:13.428948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:46.268 [2024-12-09 23:16:13.428963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:46.268 [2024-12-09 23:16:13.428974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.268 [2024-12-09 23:16:13.429129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.268 [2024-12-09 23:16:13.429144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:46.268 [2024-12-09 23:16:13.429158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:46.268 [2024-12-09 23:16:13.429173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.268 [2024-12-09 23:16:13.429225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.268 [2024-12-09 23:16:13.429238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:46.268 [2024-12-09 23:16:13.429252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:46.269 [2024-12-09 23:16:13.429262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.269 [2024-12-09 23:16:13.429307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.269 [2024-12-09 23:16:13.429320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:46.269 [2024-12-09 23:16:13.429334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:34:46.269 [2024-12-09 23:16:13.429347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.269 [2024-12-09 23:16:13.429401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:46.269 [2024-12-09 23:16:13.429413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:46.269 [2024-12-09 23:16:13.429426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:46.269 [2024-12-09 23:16:13.429437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:46.269 [2024-12-09 23:16:13.429620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 598.983 ms, result 0 00:34:46.269 true 00:34:46.269 23:16:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81519 00:34:46.269 23:16:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81519 00:34:46.269 23:16:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:34:46.269 [2024-12-09 23:16:13.563753] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:34:46.269 [2024-12-09 23:16:13.563901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82376 ] 00:34:46.527 [2024-12-09 23:16:13.745638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.785 [2024-12-09 23:16:13.869988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.160  [2024-12-09T23:16:16.433Z] Copying: 192/1024 [MB] (192 MBps) [2024-12-09T23:16:17.369Z] Copying: 384/1024 [MB] (192 MBps) [2024-12-09T23:16:18.332Z] Copying: 575/1024 [MB] (191 MBps) [2024-12-09T23:16:19.268Z] Copying: 763/1024 [MB] (188 MBps) [2024-12-09T23:16:19.836Z] Copying: 954/1024 [MB] (191 MBps) [2024-12-09T23:16:21.210Z] Copying: 1024/1024 [MB] (average 190 MBps) 00:34:53.874 00:34:53.874 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81519 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:34:53.874 23:16:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:53.874 [2024-12-09 23:16:20.920853] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
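
Note the sequence just logged: the spdk_tgt that owned ftl0 was killed with SIGKILL mid-run (the dirty shutdown under test), its shared-memory trace file removed, and yet the next spdk_dd writes to ftl0 anyway. That works because spdk_dd is a standalone SPDK application: given --json it instantiates the saved bdev configuration itself rather than talking to a running target. A sketch of that invocation, flags copied from the trace (only the SPDK variable is introduced here for brevity):

    # spdk_dd boots its own SPDK app; --json supplies the bdev config that
    # save_subsystem_config captured earlier, so no spdk_tgt is required.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_dd" \
        --if="$SPDK/test/ftl/testfile2" \
        --ob=ftl0 \
        --count=262144 \
        --seek=262144 \
        --json="$SPDK/test/ftl/config/ftl.json"
    # --ob names an output bdev, not a file; --count/--seek are in I/O units
    # (4 KiB each elsewhere in this test), i.e. 1 GiB written at a 1 GiB offset.
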
00:34:53.875 [2024-12-09 23:16:20.921256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82451 ] 00:34:53.875 [2024-12-09 23:16:21.106980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:54.133 [2024-12-09 23:16:21.245375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:54.392 [2024-12-09 23:16:21.640190] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:54.392 [2024-12-09 23:16:21.640287] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:54.392 [2024-12-09 23:16:21.707591] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:34:54.392 [2024-12-09 23:16:21.707919] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:34:54.392 [2024-12-09 23:16:21.708090] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:34:54.962 [2024-12-09 23:16:22.021428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.021512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:54.962 [2024-12-09 23:16:22.021530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:54.962 [2024-12-09 23:16:22.021562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.021632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.021646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:54.962 [2024-12-09 23:16:22.021658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:34:54.962 [2024-12-09 23:16:22.021669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.021692] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:54.962 [2024-12-09 23:16:22.022710] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:54.962 [2024-12-09 23:16:22.022739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.022751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:54.962 [2024-12-09 23:16:22.022763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:34:54.962 [2024-12-09 23:16:22.022773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.024629] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:54.962 [2024-12-09 23:16:22.044778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.044847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:54.962 [2024-12-09 23:16:22.044865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.180 ms 00:34:54.962 [2024-12-09 23:16:22.044876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.044994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.045008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:34:54.962 [2024-12-09 23:16:22.045021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:34:54.962 [2024-12-09 23:16:22.045032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.055736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.055795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:54.962 [2024-12-09 23:16:22.055811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.616 ms 00:34:54.962 [2024-12-09 23:16:22.055821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.055925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.055944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:54.962 [2024-12-09 23:16:22.055956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:34:54.962 [2024-12-09 23:16:22.055966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.056053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.056067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:54.962 [2024-12-09 23:16:22.056078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:34:54.962 [2024-12-09 23:16:22.056088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.056116] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:54.962 [2024-12-09 23:16:22.061372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.061413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:54.962 [2024-12-09 23:16:22.061427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.272 ms 00:34:54.962 [2024-12-09 23:16:22.061454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.061503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.061516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:54.962 [2024-12-09 23:16:22.061526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:54.962 [2024-12-09 23:16:22.061537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.061589] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:54.962 [2024-12-09 23:16:22.061615] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:54.962 [2024-12-09 23:16:22.061652] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:54.962 [2024-12-09 23:16:22.061672] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:54.962 [2024-12-09 23:16:22.061764] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:54.962 [2024-12-09 23:16:22.061780] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:54.962 
[2024-12-09 23:16:22.061794] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:54.962 [2024-12-09 23:16:22.061811] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:54.962 [2024-12-09 23:16:22.061824] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:54.962 [2024-12-09 23:16:22.061836] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:54.962 [2024-12-09 23:16:22.061847] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:54.962 [2024-12-09 23:16:22.061857] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:54.962 [2024-12-09 23:16:22.061867] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:54.962 [2024-12-09 23:16:22.061878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.061889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:54.962 [2024-12-09 23:16:22.061900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:34:54.962 [2024-12-09 23:16:22.061910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.061983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.962 [2024-12-09 23:16:22.061999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:54.962 [2024-12-09 23:16:22.062009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:34:54.962 [2024-12-09 23:16:22.062020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.962 [2024-12-09 23:16:22.062117] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:54.962 [2024-12-09 23:16:22.062133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:54.962 [2024-12-09 23:16:22.062144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:54.962 [2024-12-09 23:16:22.062155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:54.962 [2024-12-09 23:16:22.062175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:54.962 [2024-12-09 23:16:22.062185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:54.962 [2024-12-09 23:16:22.062195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:54.962 [2024-12-09 23:16:22.062205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:54.962 [2024-12-09 23:16:22.062214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:54.962 [2024-12-09 23:16:22.062236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:54.962 [2024-12-09 23:16:22.062247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:54.962 [2024-12-09 23:16:22.062257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:54.962 [2024-12-09 23:16:22.062266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:54.962 [2024-12-09 23:16:22.062276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:54.962 [2024-12-09 23:16:22.062286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:54.962 [2024-12-09 23:16:22.062296] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:54.962 [2024-12-09 23:16:22.062305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:54.962 [2024-12-09 23:16:22.062314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:54.962 [2024-12-09 23:16:22.062323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:54.963 [2024-12-09 23:16:22.062332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:54.963 [2024-12-09 23:16:22.062341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:54.963 [2024-12-09 23:16:22.062351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:54.963 [2024-12-09 23:16:22.062360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:54.963 [2024-12-09 23:16:22.062369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:54.963 [2024-12-09 23:16:22.062378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:54.963 [2024-12-09 23:16:22.062387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:54.963 [2024-12-09 23:16:22.062396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:54.963 [2024-12-09 23:16:22.062405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:54.963 [2024-12-09 23:16:22.062413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:54.963 [2024-12-09 23:16:22.062422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:54.963 [2024-12-09 23:16:22.062432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:54.963 [2024-12-09 23:16:22.062440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:54.963 [2024-12-09 23:16:22.062449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:54.963 [2024-12-09 23:16:22.062500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:54.963 [2024-12-09 23:16:22.062510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:54.963 [2024-12-09 23:16:22.062520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:54.963 [2024-12-09 23:16:22.062529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:54.963 [2024-12-09 23:16:22.062539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:54.963 [2024-12-09 23:16:22.062548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:54.963 [2024-12-09 23:16:22.062557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:54.963 [2024-12-09 23:16:22.062566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:54.963 [2024-12-09 23:16:22.062577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:54.963 [2024-12-09 23:16:22.062587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:54.963 [2024-12-09 23:16:22.062597] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:54.963 [2024-12-09 23:16:22.062607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:54.963 [2024-12-09 23:16:22.062622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:54.963 [2024-12-09 23:16:22.062632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:54.963 [2024-12-09 
23:16:22.062643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:54.963 [2024-12-09 23:16:22.062653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:54.963 [2024-12-09 23:16:22.062663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:54.963 [2024-12-09 23:16:22.062672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:54.963 [2024-12-09 23:16:22.062682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:54.963 [2024-12-09 23:16:22.062691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:54.963 [2024-12-09 23:16:22.062702] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:54.963 [2024-12-09 23:16:22.062715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:54.963 [2024-12-09 23:16:22.062727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:54.963 [2024-12-09 23:16:22.062739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:54.963 [2024-12-09 23:16:22.062750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:54.963 [2024-12-09 23:16:22.062761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:54.963 [2024-12-09 23:16:22.062774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:54.963 [2024-12-09 23:16:22.062784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:54.963 [2024-12-09 23:16:22.062795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:54.963 [2024-12-09 23:16:22.062805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:54.963 [2024-12-09 23:16:22.062816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:54.963 [2024-12-09 23:16:22.062826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:54.963 [2024-12-09 23:16:22.062836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:54.963 [2024-12-09 23:16:22.062846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:54.963 [2024-12-09 23:16:22.062857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:54.963 [2024-12-09 23:16:22.062867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:54.963 [2024-12-09 23:16:22.062877] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:34:54.963 [2024-12-09 23:16:22.062889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:54.963 [2024-12-09 23:16:22.062900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:54.963 [2024-12-09 23:16:22.062910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:54.963 [2024-12-09 23:16:22.062921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:54.963 [2024-12-09 23:16:22.062931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:54.963 [2024-12-09 23:16:22.062941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.062952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:54.963 [2024-12-09 23:16:22.062963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:34:54.963 [2024-12-09 23:16:22.062973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.963 [2024-12-09 23:16:22.110255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.110327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:54.963 [2024-12-09 23:16:22.110345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.298 ms 00:34:54.963 [2024-12-09 23:16:22.110357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.963 [2024-12-09 23:16:22.110494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.110509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:54.963 [2024-12-09 23:16:22.110520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:34:54.963 [2024-12-09 23:16:22.110531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.963 [2024-12-09 23:16:22.170835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.171155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:54.963 [2024-12-09 23:16:22.171192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.293 ms 00:34:54.963 [2024-12-09 23:16:22.171204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.963 [2024-12-09 23:16:22.171277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.171290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:54.963 [2024-12-09 23:16:22.171302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:54.963 [2024-12-09 23:16:22.171313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.963 [2024-12-09 23:16:22.171870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.171889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:54.963 [2024-12-09 23:16:22.171902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:34:54.963 [2024-12-09 23:16:22.171920] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.963 [2024-12-09 23:16:22.172064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.172079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:54.963 [2024-12-09 23:16:22.172090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:34:54.963 [2024-12-09 23:16:22.172101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.963 [2024-12-09 23:16:22.193878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.194169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:54.963 [2024-12-09 23:16:22.194304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.784 ms 00:34:54.963 [2024-12-09 23:16:22.194346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.963 [2024-12-09 23:16:22.216738] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:54.963 [2024-12-09 23:16:22.217033] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:54.963 [2024-12-09 23:16:22.217139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.217174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:54.963 [2024-12-09 23:16:22.217208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.611 ms 00:34:54.963 [2024-12-09 23:16:22.217239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.963 [2024-12-09 23:16:22.249070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.249407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:54.963 [2024-12-09 23:16:22.249554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.785 ms 00:34:54.963 [2024-12-09 23:16:22.249598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.963 [2024-12-09 23:16:22.270420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.270735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:54.963 [2024-12-09 23:16:22.270851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.743 ms 00:34:54.963 [2024-12-09 23:16:22.270891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.963 [2024-12-09 23:16:22.291026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.963 [2024-12-09 23:16:22.291306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:54.964 [2024-12-09 23:16:22.291388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.055 ms 00:34:54.964 [2024-12-09 23:16:22.291424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:54.964 [2024-12-09 23:16:22.292322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:54.964 [2024-12-09 23:16:22.292479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:54.964 [2024-12-09 23:16:22.292557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:34:54.964 [2024-12-09 23:16:22.292592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
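
Every management step in these traces is bracketed by the same four trace_step notices (Action, name, duration, status). When a run needs triage, the name/duration pairs tabulate easily from a captured log; a sketch, assuming one notice per line as spdk_tgt emits them (the copy above is wrapped) and a hypothetical capture file ftl.log:

    # Print "step -> duration" for every FTL management trace_step pair.
    awk '/trace_step.*name: /     { sub(/.*name: /, "");     step = $0 }
         /trace_step.*duration: / { sub(/.*duration: /, ""); print step " -> " $0 }' ftl.log
    # e.g. "Restore trim metadata -> 20.055 ms" for the step logged just above.
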
00:34:55.229 [2024-12-09 23:16:22.385518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.229 [2024-12-09 23:16:22.385837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:55.229 [2024-12-09 23:16:22.385971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.973 ms 00:34:55.230 [2024-12-09 23:16:22.386009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.230 [2024-12-09 23:16:22.400101] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:55.230 [2024-12-09 23:16:22.404004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.230 [2024-12-09 23:16:22.404227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:55.230 [2024-12-09 23:16:22.404347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.923 ms 00:34:55.230 [2024-12-09 23:16:22.404394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.230 [2024-12-09 23:16:22.404583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.230 [2024-12-09 23:16:22.404723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:55.230 [2024-12-09 23:16:22.404778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:55.230 [2024-12-09 23:16:22.404809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.230 [2024-12-09 23:16:22.404932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.230 [2024-12-09 23:16:22.404969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:55.230 [2024-12-09 23:16:22.404982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:34:55.230 [2024-12-09 23:16:22.404993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.230 [2024-12-09 23:16:22.405026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.230 [2024-12-09 23:16:22.405037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:55.230 [2024-12-09 23:16:22.405049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:55.230 [2024-12-09 23:16:22.405059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.230 [2024-12-09 23:16:22.405097] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:55.230 [2024-12-09 23:16:22.405111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.230 [2024-12-09 23:16:22.405121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:55.230 [2024-12-09 23:16:22.405131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:55.230 [2024-12-09 23:16:22.405146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.230 [2024-12-09 23:16:22.444758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.230 [2024-12-09 23:16:22.444848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:55.230 [2024-12-09 23:16:22.444867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.651 ms 00:34:55.230 [2024-12-09 23:16:22.444879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.230 [2024-12-09 23:16:22.445014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:55.230 [2024-12-09 
23:16:22.445028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:55.230 [2024-12-09 23:16:22.445041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:34:55.230 [2024-12-09 23:16:22.445051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:55.230 [2024-12-09 23:16:22.446397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 425.132 ms, result 0 00:34:56.167  [2024-12-09T23:16:24.877Z] Copying: 27/1024 [MB] (27 MBps) [2024-12-09T23:16:25.812Z] Copying: 54/1024 [MB] (27 MBps) [2024-12-09T23:16:26.747Z] Copying: 81/1024 [MB] (26 MBps) [2024-12-09T23:16:27.682Z] Copying: 106/1024 [MB] (25 MBps) [2024-12-09T23:16:28.640Z] Copying: 131/1024 [MB] (25 MBps) [2024-12-09T23:16:29.573Z] Copying: 156/1024 [MB] (24 MBps) [2024-12-09T23:16:30.508Z] Copying: 181/1024 [MB] (25 MBps) [2024-12-09T23:16:31.447Z] Copying: 205/1024 [MB] (24 MBps) [2024-12-09T23:16:32.823Z] Copying: 231/1024 [MB] (26 MBps) [2024-12-09T23:16:33.757Z] Copying: 258/1024 [MB] (26 MBps) [2024-12-09T23:16:34.692Z] Copying: 283/1024 [MB] (25 MBps) [2024-12-09T23:16:35.676Z] Copying: 309/1024 [MB] (25 MBps) [2024-12-09T23:16:36.613Z] Copying: 335/1024 [MB] (25 MBps) [2024-12-09T23:16:37.557Z] Copying: 360/1024 [MB] (25 MBps) [2024-12-09T23:16:38.492Z] Copying: 386/1024 [MB] (25 MBps) [2024-12-09T23:16:39.867Z] Copying: 412/1024 [MB] (25 MBps) [2024-12-09T23:16:40.435Z] Copying: 437/1024 [MB] (25 MBps) [2024-12-09T23:16:41.814Z] Copying: 463/1024 [MB] (26 MBps) [2024-12-09T23:16:42.747Z] Copying: 488/1024 [MB] (24 MBps) [2024-12-09T23:16:43.684Z] Copying: 513/1024 [MB] (24 MBps) [2024-12-09T23:16:44.633Z] Copying: 535/1024 [MB] (21 MBps) [2024-12-09T23:16:45.569Z] Copying: 559/1024 [MB] (24 MBps) [2024-12-09T23:16:46.503Z] Copying: 584/1024 [MB] (24 MBps) [2024-12-09T23:16:47.442Z] Copying: 608/1024 [MB] (24 MBps) [2024-12-09T23:16:48.426Z] Copying: 633/1024 [MB] (24 MBps) [2024-12-09T23:16:49.817Z] Copying: 658/1024 [MB] (24 MBps) [2024-12-09T23:16:50.750Z] Copying: 682/1024 [MB] (24 MBps) [2024-12-09T23:16:51.696Z] Copying: 707/1024 [MB] (24 MBps) [2024-12-09T23:16:52.632Z] Copying: 732/1024 [MB] (24 MBps) [2024-12-09T23:16:53.569Z] Copying: 756/1024 [MB] (24 MBps) [2024-12-09T23:16:54.504Z] Copying: 781/1024 [MB] (24 MBps) [2024-12-09T23:16:55.449Z] Copying: 806/1024 [MB] (24 MBps) [2024-12-09T23:16:56.821Z] Copying: 831/1024 [MB] (25 MBps) [2024-12-09T23:16:57.757Z] Copying: 856/1024 [MB] (25 MBps) [2024-12-09T23:16:58.691Z] Copying: 881/1024 [MB] (24 MBps) [2024-12-09T23:16:59.624Z] Copying: 905/1024 [MB] (24 MBps) [2024-12-09T23:17:00.554Z] Copying: 930/1024 [MB] (24 MBps) [2024-12-09T23:17:01.486Z] Copying: 955/1024 [MB] (25 MBps) [2024-12-09T23:17:02.417Z] Copying: 981/1024 [MB] (26 MBps) [2024-12-09T23:17:02.981Z] Copying: 1009/1024 [MB] (27 MBps) [2024-12-09T23:17:02.981Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-09 23:17:02.935744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.645 [2024-12-09 23:17:02.935802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:35.645 [2024-12-09 23:17:02.935820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:35.645 [2024-12-09 23:17:02.935832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.645 [2024-12-09 23:17:02.935858] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel 
destroy on app_thread 00:35:35.645 [2024-12-09 23:17:02.940370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.645 [2024-12-09 23:17:02.940415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:35.645 [2024-12-09 23:17:02.940429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.501 ms 00:35:35.645 [2024-12-09 23:17:02.940441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.645 [2024-12-09 23:17:02.942500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.645 [2024-12-09 23:17:02.942546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:35.645 [2024-12-09 23:17:02.942561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.016 ms 00:35:35.645 [2024-12-09 23:17:02.942572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.645 [2024-12-09 23:17:02.961063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.645 [2024-12-09 23:17:02.961146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:35:35.645 [2024-12-09 23:17:02.961164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.494 ms 00:35:35.645 [2024-12-09 23:17:02.961175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.645 [2024-12-09 23:17:02.966231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.645 [2024-12-09 23:17:02.966298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:35:35.645 [2024-12-09 23:17:02.966313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.026 ms 00:35:35.645 [2024-12-09 23:17:02.966323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.904 [2024-12-09 23:17:03.006413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.904 [2024-12-09 23:17:03.006513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:35:35.904 [2024-12-09 23:17:03.006530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.074 ms 00:35:35.904 [2024-12-09 23:17:03.006543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.904 [2024-12-09 23:17:03.029440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.904 [2024-12-09 23:17:03.029538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:35:35.904 [2024-12-09 23:17:03.029556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.852 ms 00:35:35.904 [2024-12-09 23:17:03.029567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.904 [2024-12-09 23:17:03.031337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.904 [2024-12-09 23:17:03.031384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:35:35.904 [2024-12-09 23:17:03.031406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:35:35.904 [2024-12-09 23:17:03.031417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.904 [2024-12-09 23:17:03.072715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.904 [2024-12-09 23:17:03.073040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:35:35.904 [2024-12-09 23:17:03.073069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.341 ms 00:35:35.904 [2024-12-09 
23:17:03.073098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.904 [2024-12-09 23:17:03.112706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.904 [2024-12-09 23:17:03.112780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:35:35.904 [2024-12-09 23:17:03.112796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.601 ms 00:35:35.904 [2024-12-09 23:17:03.112807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.904 [2024-12-09 23:17:03.152100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.904 [2024-12-09 23:17:03.152355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:35:35.904 [2024-12-09 23:17:03.152382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.287 ms 00:35:35.904 [2024-12-09 23:17:03.152394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.904 [2024-12-09 23:17:03.192257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.904 [2024-12-09 23:17:03.192332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:35:35.904 [2024-12-09 23:17:03.192350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.777 ms 00:35:35.904 [2024-12-09 23:17:03.192361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.904 [2024-12-09 23:17:03.192438] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:35.904 [2024-12-09 23:17:03.192473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 1024 / 261120 wr_cnt: 1 state: open 00:35:35.904 [2024-12-09 23:17:03.192488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:35:35.904 [2024-12-09 23:17:03.192635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:35.904 [2024-12-09 23:17:03.192923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.192933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.192944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.192955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.192966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.192977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.192988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.192998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193459] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:35.905 [2024-12-09 23:17:03.193602] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:35.905 [2024-12-09 23:17:03.193612] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 916e0778-c8dd-45bc-ac27-6bd810c141dd 00:35:35.905 [2024-12-09 23:17:03.193641] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 1024 00:35:35.905 [2024-12-09 23:17:03.193656] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1984 00:35:35.905 [2024-12-09 23:17:03.193667] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024 00:35:35.905 [2024-12-09 23:17:03.193678] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.9375 00:35:35.905 [2024-12-09 23:17:03.193689] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:35.905 [2024-12-09 23:17:03.193699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:35.905 [2024-12-09 23:17:03.193709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:35.905 [2024-12-09 23:17:03.193718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:35.905 [2024-12-09 23:17:03.193727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:35.905 [2024-12-09 23:17:03.193737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.905 [2024-12-09 23:17:03.193748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:35.905 [2024-12-09 23:17:03.193759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:35:35.905 [2024-12-09 23:17:03.193769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.905 [2024-12-09 23:17:03.214473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.905 [2024-12-09 23:17:03.214535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize L2P 00:35:35.905 [2024-12-09 23:17:03.214550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.658 ms 00:35:35.905 [2024-12-09 23:17:03.214561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.905 [2024-12-09 23:17:03.215147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.905 [2024-12-09 23:17:03.215163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:35.905 [2024-12-09 23:17:03.215175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:35:35.905 [2024-12-09 23:17:03.215194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.164 [2024-12-09 23:17:03.267880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.164 [2024-12-09 23:17:03.267963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:36.164 [2024-12-09 23:17:03.267979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.164 [2024-12-09 23:17:03.267990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.164 [2024-12-09 23:17:03.268076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.164 [2024-12-09 23:17:03.268088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:36.164 [2024-12-09 23:17:03.268100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.164 [2024-12-09 23:17:03.268117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.164 [2024-12-09 23:17:03.268236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.164 [2024-12-09 23:17:03.268252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:36.164 [2024-12-09 23:17:03.268263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.164 [2024-12-09 23:17:03.268274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.164 [2024-12-09 23:17:03.268293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.164 [2024-12-09 23:17:03.268304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:36.164 [2024-12-09 23:17:03.268315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.164 [2024-12-09 23:17:03.268325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.164 [2024-12-09 23:17:03.395951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.164 [2024-12-09 23:17:03.396293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:36.164 [2024-12-09 23:17:03.396324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.164 [2024-12-09 23:17:03.396336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.423 [2024-12-09 23:17:03.506313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.424 [2024-12-09 23:17:03.506397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:36.424 [2024-12-09 23:17:03.506413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.424 [2024-12-09 23:17:03.506425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.424 [2024-12-09 23:17:03.506578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.424 
[2024-12-09 23:17:03.506593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:36.424 [2024-12-09 23:17:03.506605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.424 [2024-12-09 23:17:03.506615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.424 [2024-12-09 23:17:03.506662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.424 [2024-12-09 23:17:03.506675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:36.424 [2024-12-09 23:17:03.506686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.424 [2024-12-09 23:17:03.506696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.424 [2024-12-09 23:17:03.506817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.424 [2024-12-09 23:17:03.506838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:36.424 [2024-12-09 23:17:03.506849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.424 [2024-12-09 23:17:03.506860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.424 [2024-12-09 23:17:03.506900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.424 [2024-12-09 23:17:03.506913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:36.424 [2024-12-09 23:17:03.506924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.424 [2024-12-09 23:17:03.506950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.424 [2024-12-09 23:17:03.506996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.424 [2024-12-09 23:17:03.507012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:36.424 [2024-12-09 23:17:03.507023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.424 [2024-12-09 23:17:03.507033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.424 [2024-12-09 23:17:03.507077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:36.424 [2024-12-09 23:17:03.507090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:36.424 [2024-12-09 23:17:03.507101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:36.424 [2024-12-09 23:17:03.507111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:36.424 [2024-12-09 23:17:03.507240] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 572.385 ms, result 0 00:35:37.358 00:35:37.358 00:35:37.617 23:17:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:35:39.525 23:17:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:39.525 [2024-12-09 23:17:06.549195] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
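The test flow visible here is a dirty-shutdown round trip: data written through ftl0 is read back with spdk_dd (--ib=ftl0) after the device has been brought down dirty and restarted, and the md5sum invocations compare the read-back contents against the original. The statistics dumped during shutdown also show where the WAF figure comes from: 1984 total writes over 1024 user writes is exactly 1.9375, the value printed in the dump. Below is a minimal sketch of the checksum comparison, assuming the test judges success by matching MD5 digests; the exact pairing of testfile and testfile2 is down to dirty_shutdown.sh, and the script is illustrative only.

    import hashlib

    def md5_of(path, chunk=1 << 20):
        # Stream the file so large test images need not fit in memory.
        h = hashlib.md5()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    base = "/home/vagrant/spdk_repo/spdk/test/ftl"
    # Which of the two is the original and which the read-back is
    # determined by dirty_shutdown.sh; only digest equality matters here.
    orig = md5_of(f"{base}/testfile")
    readback = md5_of(f"{base}/testfile2")
    print("match" if orig == readback else "MISMATCH", orig, readback)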
00:35:39.525 [2024-12-09 23:17:06.549362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82899 ] 00:35:39.525 [2024-12-09 23:17:06.731414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.785 [2024-12-09 23:17:06.866104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.045 [2024-12-09 23:17:07.268595] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:40.045 [2024-12-09 23:17:07.268694] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:40.310 [2024-12-09 23:17:07.432269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.311 [2024-12-09 23:17:07.432612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:40.311 [2024-12-09 23:17:07.432643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:40.311 [2024-12-09 23:17:07.432655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.432748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.311 [2024-12-09 23:17:07.432765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:40.311 [2024-12-09 23:17:07.432776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:35:40.311 [2024-12-09 23:17:07.432787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.432811] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:40.311 [2024-12-09 23:17:07.433928] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:40.311 [2024-12-09 23:17:07.433959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.311 [2024-12-09 23:17:07.433971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:40.311 [2024-12-09 23:17:07.433983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:35:40.311 [2024-12-09 23:17:07.433993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.436137] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:35:40.311 [2024-12-09 23:17:07.456831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.311 [2024-12-09 23:17:07.456900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:40.311 [2024-12-09 23:17:07.456918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.725 ms 00:35:40.311 [2024-12-09 23:17:07.456929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.457045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.311 [2024-12-09 23:17:07.457059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:40.311 [2024-12-09 23:17:07.457071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:35:40.311 [2024-12-09 23:17:07.457082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.467795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:40.311 [2024-12-09 23:17:07.467850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:40.311 [2024-12-09 23:17:07.467866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.626 ms 00:35:40.311 [2024-12-09 23:17:07.467883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.467982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.311 [2024-12-09 23:17:07.468000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:40.311 [2024-12-09 23:17:07.468011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:35:40.311 [2024-12-09 23:17:07.468022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.468099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.311 [2024-12-09 23:17:07.468113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:40.311 [2024-12-09 23:17:07.468124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:35:40.311 [2024-12-09 23:17:07.468135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.468168] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:40.311 [2024-12-09 23:17:07.473961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.311 [2024-12-09 23:17:07.474003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:40.311 [2024-12-09 23:17:07.474022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.809 ms 00:35:40.311 [2024-12-09 23:17:07.474032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.474074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.311 [2024-12-09 23:17:07.474086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:40.311 [2024-12-09 23:17:07.474097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:40.311 [2024-12-09 23:17:07.474108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.474153] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:40.311 [2024-12-09 23:17:07.474181] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:40.311 [2024-12-09 23:17:07.474216] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:40.311 [2024-12-09 23:17:07.474239] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:40.311 [2024-12-09 23:17:07.474329] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:40.311 [2024-12-09 23:17:07.474343] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:40.311 [2024-12-09 23:17:07.474357] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:40.311 [2024-12-09 23:17:07.474371] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:40.311 [2024-12-09 23:17:07.474385] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:40.311 [2024-12-09 23:17:07.474397] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:40.311 [2024-12-09 23:17:07.474408] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:40.311 [2024-12-09 23:17:07.474422] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:40.311 [2024-12-09 23:17:07.474432] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:40.311 [2024-12-09 23:17:07.474443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.311 [2024-12-09 23:17:07.474480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:40.311 [2024-12-09 23:17:07.474490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:35:40.311 [2024-12-09 23:17:07.474501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.474573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.311 [2024-12-09 23:17:07.474585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:40.311 [2024-12-09 23:17:07.474596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:35:40.311 [2024-12-09 23:17:07.474606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.311 [2024-12-09 23:17:07.474709] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:40.311 [2024-12-09 23:17:07.474724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:40.311 [2024-12-09 23:17:07.474735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:40.312 [2024-12-09 23:17:07.474747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.312 [2024-12-09 23:17:07.474758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:40.312 [2024-12-09 23:17:07.474768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:40.312 [2024-12-09 23:17:07.474777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:40.312 [2024-12-09 23:17:07.474787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:40.312 [2024-12-09 23:17:07.474797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:40.312 [2024-12-09 23:17:07.474806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:40.312 [2024-12-09 23:17:07.474816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:40.312 [2024-12-09 23:17:07.474826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:40.312 [2024-12-09 23:17:07.474835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:40.312 [2024-12-09 23:17:07.474856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:40.312 [2024-12-09 23:17:07.474866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:40.312 [2024-12-09 23:17:07.474876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.312 [2024-12-09 23:17:07.474886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:40.312 [2024-12-09 23:17:07.474895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:40.312 [2024-12-09 23:17:07.474905] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.312 [2024-12-09 23:17:07.474914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:40.312 [2024-12-09 23:17:07.474923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:40.312 [2024-12-09 23:17:07.474933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:40.312 [2024-12-09 23:17:07.474943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:40.312 [2024-12-09 23:17:07.474952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:40.312 [2024-12-09 23:17:07.474962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:40.312 [2024-12-09 23:17:07.474971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:40.312 [2024-12-09 23:17:07.474980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:40.312 [2024-12-09 23:17:07.474989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:40.312 [2024-12-09 23:17:07.474997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:40.312 [2024-12-09 23:17:07.475007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:40.312 [2024-12-09 23:17:07.475016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:40.312 [2024-12-09 23:17:07.475025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:40.312 [2024-12-09 23:17:07.475034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:40.312 [2024-12-09 23:17:07.475043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:40.312 [2024-12-09 23:17:07.475052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:40.312 [2024-12-09 23:17:07.475061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:40.312 [2024-12-09 23:17:07.475070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:40.312 [2024-12-09 23:17:07.475079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:40.312 [2024-12-09 23:17:07.475088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:40.312 [2024-12-09 23:17:07.475097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.312 [2024-12-09 23:17:07.475106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:40.312 [2024-12-09 23:17:07.475115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:40.312 [2024-12-09 23:17:07.475125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.312 [2024-12-09 23:17:07.475135] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:40.312 [2024-12-09 23:17:07.475145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:40.312 [2024-12-09 23:17:07.475155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:40.312 [2024-12-09 23:17:07.475165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:40.312 [2024-12-09 23:17:07.475175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:40.312 [2024-12-09 23:17:07.475185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:40.312 [2024-12-09 23:17:07.475194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:40.312 
[2024-12-09 23:17:07.475204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:40.312 [2024-12-09 23:17:07.475213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:40.312 [2024-12-09 23:17:07.475222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:40.312 [2024-12-09 23:17:07.475234] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:40.312 [2024-12-09 23:17:07.475246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:40.312 [2024-12-09 23:17:07.475262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:40.312 [2024-12-09 23:17:07.475273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:40.312 [2024-12-09 23:17:07.475283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:40.312 [2024-12-09 23:17:07.475293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:40.312 [2024-12-09 23:17:07.475304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:40.312 [2024-12-09 23:17:07.475316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:40.312 [2024-12-09 23:17:07.475326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:40.312 [2024-12-09 23:17:07.475336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:40.312 [2024-12-09 23:17:07.475347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:40.312 [2024-12-09 23:17:07.475357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:40.313 [2024-12-09 23:17:07.475368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:40.313 [2024-12-09 23:17:07.475378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:40.313 [2024-12-09 23:17:07.475388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:40.313 [2024-12-09 23:17:07.475399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:40.313 [2024-12-09 23:17:07.475409] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:40.313 [2024-12-09 23:17:07.475420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:40.313 [2024-12-09 23:17:07.475432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:40.313 [2024-12-09 23:17:07.475442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:40.313 [2024-12-09 23:17:07.475463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:40.313 [2024-12-09 23:17:07.475476] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:40.313 [2024-12-09 23:17:07.475487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.313 [2024-12-09 23:17:07.475498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:40.313 [2024-12-09 23:17:07.475509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:35:40.313 [2024-12-09 23:17:07.475520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.313 [2024-12-09 23:17:07.521107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.313 [2024-12-09 23:17:07.521182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:40.313 [2024-12-09 23:17:07.521199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.602 ms 00:35:40.313 [2024-12-09 23:17:07.521216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.313 [2024-12-09 23:17:07.521323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.313 [2024-12-09 23:17:07.521336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:40.313 [2024-12-09 23:17:07.521347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:35:40.313 [2024-12-09 23:17:07.521357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.313 [2024-12-09 23:17:07.578518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.313 [2024-12-09 23:17:07.578821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:40.313 [2024-12-09 23:17:07.578850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.163 ms 00:35:40.313 [2024-12-09 23:17:07.578863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.313 [2024-12-09 23:17:07.578938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.313 [2024-12-09 23:17:07.578950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:40.313 [2024-12-09 23:17:07.578970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:40.313 [2024-12-09 23:17:07.578980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.313 [2024-12-09 23:17:07.579545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.313 [2024-12-09 23:17:07.579565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:40.313 [2024-12-09 23:17:07.579578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:35:40.313 [2024-12-09 23:17:07.579589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.313 [2024-12-09 23:17:07.579721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.313 [2024-12-09 23:17:07.579736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:40.313 [2024-12-09 23:17:07.579756] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:35:40.313 [2024-12-09 23:17:07.579767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.313 [2024-12-09 23:17:07.600149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.313 [2024-12-09 23:17:07.600219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:40.313 [2024-12-09 23:17:07.600236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.390 ms 00:35:40.313 [2024-12-09 23:17:07.600247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.313 [2024-12-09 23:17:07.620353] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:35:40.313 [2024-12-09 23:17:07.620664] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:40.313 [2024-12-09 23:17:07.620693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.313 [2024-12-09 23:17:07.620706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:40.313 [2024-12-09 23:17:07.620721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.316 ms 00:35:40.313 [2024-12-09 23:17:07.620731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.573 [2024-12-09 23:17:07.651957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.573 [2024-12-09 23:17:07.652292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:40.573 [2024-12-09 23:17:07.652323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.171 ms 00:35:40.573 [2024-12-09 23:17:07.652335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.573 [2024-12-09 23:17:07.671852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.573 [2024-12-09 23:17:07.671932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:40.573 [2024-12-09 23:17:07.671949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.455 ms 00:35:40.573 [2024-12-09 23:17:07.671959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.573 [2024-12-09 23:17:07.691381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.573 [2024-12-09 23:17:07.691688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:40.573 [2024-12-09 23:17:07.691716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.372 ms 00:35:40.573 [2024-12-09 23:17:07.691728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.573 [2024-12-09 23:17:07.692668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.573 [2024-12-09 23:17:07.692699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:40.573 [2024-12-09 23:17:07.692717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:35:40.573 [2024-12-09 23:17:07.692728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.573 [2024-12-09 23:17:07.787012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.573 [2024-12-09 23:17:07.787100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:40.573 [2024-12-09 23:17:07.787130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.406 ms 00:35:40.573 [2024-12-09 23:17:07.787141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.573 [2024-12-09 23:17:07.800383] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:40.573 [2024-12-09 23:17:07.804041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.573 [2024-12-09 23:17:07.804277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:40.573 [2024-12-09 23:17:07.804306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.832 ms 00:35:40.574 [2024-12-09 23:17:07.804319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.574 [2024-12-09 23:17:07.804459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.574 [2024-12-09 23:17:07.804475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:40.574 [2024-12-09 23:17:07.804491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:35:40.574 [2024-12-09 23:17:07.804501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.574 [2024-12-09 23:17:07.805604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.574 [2024-12-09 23:17:07.805623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:40.574 [2024-12-09 23:17:07.805635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:35:40.574 [2024-12-09 23:17:07.805646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.574 [2024-12-09 23:17:07.805676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.574 [2024-12-09 23:17:07.805687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:40.574 [2024-12-09 23:17:07.805698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:40.574 [2024-12-09 23:17:07.805708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.574 [2024-12-09 23:17:07.805748] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:40.574 [2024-12-09 23:17:07.805761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.574 [2024-12-09 23:17:07.805772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:40.574 [2024-12-09 23:17:07.805783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:35:40.574 [2024-12-09 23:17:07.805793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.574 [2024-12-09 23:17:07.844068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.574 [2024-12-09 23:17:07.844157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:40.574 [2024-12-09 23:17:07.844202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.313 ms 00:35:40.574 [2024-12-09 23:17:07.844214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:40.574 [2024-12-09 23:17:07.844324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:40.574 [2024-12-09 23:17:07.844338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:40.574 [2024-12-09 23:17:07.844350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:35:40.574 [2024-12-09 23:17:07.844360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
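
Each FTL management step above follows the same four-line trace_step pattern: an "Action" marker, the step "name", its "duration", and a "status" code, with a finish_msg summary (seen just below) closing the whole 'FTL startup' process. The per-step durations can be cross-checked against that summary. A minimal sketch, assuming this startup slice of the log has been saved to a local file named ftl.log (a hypothetical name, not something the test produces); the sum typically comes in a bit under the reported total, since gaps between steps are not traced:

    import re

    # Pull every per-step duration from the "trace_step: ... duration: X ms"
    # lines, then compare against the total reported by the finish_msg line.
    text = open("ftl.log").read()
    steps = re.findall(
        r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms", text)
    total = re.search(
        r"Management process finished, name 'FTL startup', "
        r"duration = ([0-9.]+) ms", text)
    print(f"sum of step durations: {sum(map(float, steps)):.3f} ms")
    print(f"reported total:        {total.group(1)} ms")
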
00:35:40.574 [2024-12-09 23:17:07.846049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 413.953 ms, result 0 00:35:41.969  [2024-12-09T23:17:10.241Z] Copying: 1240/1048576 [kB] (1240 kBps) [2024-12-09T23:17:11.179Z] Copying: 2864/1048576 [kB] (1624 kBps) [2024-12-09T23:17:12.134Z] Copying: 15/1024 [MB] (12 MBps) [2024-12-09T23:17:13.070Z] Copying: 47/1024 [MB] (31 MBps) [2024-12-09T23:17:14.451Z] Copying: 78/1024 [MB] (31 MBps) [2024-12-09T23:17:15.387Z] Copying: 110/1024 [MB] (31 MBps) [2024-12-09T23:17:16.323Z] Copying: 141/1024 [MB] (31 MBps) [2024-12-09T23:17:17.269Z] Copying: 174/1024 [MB] (32 MBps) [2024-12-09T23:17:18.223Z] Copying: 206/1024 [MB] (32 MBps) [2024-12-09T23:17:19.160Z] Copying: 238/1024 [MB] (32 MBps) [2024-12-09T23:17:20.093Z] Copying: 271/1024 [MB] (32 MBps) [2024-12-09T23:17:21.468Z] Copying: 303/1024 [MB] (32 MBps) [2024-12-09T23:17:22.403Z] Copying: 341/1024 [MB] (38 MBps) [2024-12-09T23:17:23.339Z] Copying: 377/1024 [MB] (35 MBps) [2024-12-09T23:17:24.276Z] Copying: 410/1024 [MB] (32 MBps) [2024-12-09T23:17:25.212Z] Copying: 443/1024 [MB] (33 MBps) [2024-12-09T23:17:26.152Z] Copying: 481/1024 [MB] (38 MBps) [2024-12-09T23:17:27.084Z] Copying: 520/1024 [MB] (39 MBps) [2024-12-09T23:17:28.464Z] Copying: 554/1024 [MB] (33 MBps) [2024-12-09T23:17:29.400Z] Copying: 587/1024 [MB] (33 MBps) [2024-12-09T23:17:30.347Z] Copying: 620/1024 [MB] (32 MBps) [2024-12-09T23:17:31.281Z] Copying: 652/1024 [MB] (32 MBps) [2024-12-09T23:17:32.216Z] Copying: 684/1024 [MB] (31 MBps) [2024-12-09T23:17:33.155Z] Copying: 716/1024 [MB] (31 MBps) [2024-12-09T23:17:34.134Z] Copying: 748/1024 [MB] (31 MBps) [2024-12-09T23:17:35.068Z] Copying: 779/1024 [MB] (31 MBps) [2024-12-09T23:17:36.443Z] Copying: 811/1024 [MB] (31 MBps) [2024-12-09T23:17:37.382Z] Copying: 843/1024 [MB] (32 MBps) [2024-12-09T23:17:38.316Z] Copying: 875/1024 [MB] (32 MBps) [2024-12-09T23:17:39.253Z] Copying: 907/1024 [MB] (31 MBps) [2024-12-09T23:17:40.204Z] Copying: 938/1024 [MB] (31 MBps) [2024-12-09T23:17:41.142Z] Copying: 970/1024 [MB] (31 MBps) [2024-12-09T23:17:41.709Z] Copying: 1003/1024 [MB] (32 MBps) [2024-12-09T23:17:42.278Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-12-09 23:17:42.128876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.942 [2024-12-09 23:17:42.129077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:14.942 [2024-12-09 23:17:42.129102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:14.942 [2024-12-09 23:17:42.129119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.942 [2024-12-09 23:17:42.129156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:14.942 [2024-12-09 23:17:42.134805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.942 [2024-12-09 23:17:42.134952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:14.942 [2024-12-09 23:17:42.134969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.626 ms 00:36:14.942 [2024-12-09 23:17:42.134980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.942 [2024-12-09 23:17:42.135211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.942 [2024-12-09 23:17:42.135239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:14.942 [2024-12-09 
23:17:42.135251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:36:14.942 [2024-12-09 23:17:42.135261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.942 [2024-12-09 23:17:42.153921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.942 [2024-12-09 23:17:42.154026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:14.942 [2024-12-09 23:17:42.154046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.665 ms 00:36:14.942 [2024-12-09 23:17:42.154058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.942 [2024-12-09 23:17:42.159120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.942 [2024-12-09 23:17:42.159167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:14.942 [2024-12-09 23:17:42.159192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.029 ms 00:36:14.942 [2024-12-09 23:17:42.159203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.942 [2024-12-09 23:17:42.198034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.942 [2024-12-09 23:17:42.198100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:14.942 [2024-12-09 23:17:42.198117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.838 ms 00:36:14.942 [2024-12-09 23:17:42.198128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.942 [2024-12-09 23:17:42.219870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.942 [2024-12-09 23:17:42.219943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:14.942 [2024-12-09 23:17:42.219962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.715 ms 00:36:14.942 [2024-12-09 23:17:42.219990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.942 [2024-12-09 23:17:42.221966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.942 [2024-12-09 23:17:42.222010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:14.942 [2024-12-09 23:17:42.222024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.909 ms 00:36:14.942 [2024-12-09 23:17:42.222044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.942 [2024-12-09 23:17:42.260757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.942 [2024-12-09 23:17:42.260843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:36:14.942 [2024-12-09 23:17:42.260861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.752 ms 00:36:14.942 [2024-12-09 23:17:42.260872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.202 [2024-12-09 23:17:42.299557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:15.202 [2024-12-09 23:17:42.299873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:36:15.202 [2024-12-09 23:17:42.299903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.666 ms 00:36:15.202 [2024-12-09 23:17:42.299914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.202 [2024-12-09 23:17:42.338112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:15.202 [2024-12-09 23:17:42.338192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist superblock 00:36:15.202 [2024-12-09 23:17:42.338209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.185 ms 00:36:15.202 [2024-12-09 23:17:42.338220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.202 [2024-12-09 23:17:42.375978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:15.202 [2024-12-09 23:17:42.376056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:15.202 [2024-12-09 23:17:42.376073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.679 ms 00:36:15.202 [2024-12-09 23:17:42.376100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.202 [2024-12-09 23:17:42.376171] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:15.202 [2024-12-09 23:17:42.376191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:15.202 [2024-12-09 23:17:42.376205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:36:15.202 [2024-12-09 23:17:42.376217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 
00:36:15.202 [2024-12-09 23:17:42.376405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:15.202 [2024-12-09 23:17:42.376692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 
wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.376997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377257] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:15.203 [2024-12-09 23:17:42.377341] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:15.203 [2024-12-09 23:17:42.377351] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 916e0778-c8dd-45bc-ac27-6bd810c141dd 00:36:15.203 [2024-12-09 23:17:42.377363] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:36:15.203 [2024-12-09 23:17:42.377373] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263616 00:36:15.203 [2024-12-09 23:17:42.377389] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261632 00:36:15.203 [2024-12-09 23:17:42.377401] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:36:15.203 [2024-12-09 23:17:42.377411] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:15.203 [2024-12-09 23:17:42.377436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:15.203 [2024-12-09 23:17:42.377447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:15.203 [2024-12-09 23:17:42.377467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:15.203 [2024-12-09 23:17:42.377476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:15.203 [2024-12-09 23:17:42.377487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:15.203 [2024-12-09 23:17:42.377498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:15.203 [2024-12-09 23:17:42.377509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.319 ms 00:36:15.203 [2024-12-09 23:17:42.377520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.203 [2024-12-09 23:17:42.397922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:15.203 [2024-12-09 23:17:42.397996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:15.203 [2024-12-09 23:17:42.398012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.374 ms 00:36:15.203 [2024-12-09 23:17:42.398024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.203 [2024-12-09 23:17:42.398681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:15.203 [2024-12-09 23:17:42.398696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:15.203 [2024-12-09 23:17:42.398707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:36:15.203 [2024-12-09 23:17:42.398717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:36:15.203 [2024-12-09 23:17:42.451098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.203 [2024-12-09 23:17:42.451175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:15.203 [2024-12-09 23:17:42.451191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.203 [2024-12-09 23:17:42.451203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.203 [2024-12-09 23:17:42.451281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.203 [2024-12-09 23:17:42.451294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:15.203 [2024-12-09 23:17:42.451306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.203 [2024-12-09 23:17:42.451316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.203 [2024-12-09 23:17:42.451415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.203 [2024-12-09 23:17:42.451430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:15.203 [2024-12-09 23:17:42.451441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.203 [2024-12-09 23:17:42.451472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.203 [2024-12-09 23:17:42.451493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.203 [2024-12-09 23:17:42.451504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:15.203 [2024-12-09 23:17:42.451515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.203 [2024-12-09 23:17:42.451526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.463 [2024-12-09 23:17:42.578808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.463 [2024-12-09 23:17:42.578889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:15.463 [2024-12-09 23:17:42.578907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.463 [2024-12-09 23:17:42.578934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.463 [2024-12-09 23:17:42.687220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.463 [2024-12-09 23:17:42.687551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:15.463 [2024-12-09 23:17:42.687581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.463 [2024-12-09 23:17:42.687595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.463 [2024-12-09 23:17:42.687701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.463 [2024-12-09 23:17:42.687723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:15.463 [2024-12-09 23:17:42.687735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.463 [2024-12-09 23:17:42.687746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.463 [2024-12-09 23:17:42.687792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.463 [2024-12-09 23:17:42.687805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:15.463 [2024-12-09 23:17:42.687815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.463 [2024-12-09 
23:17:42.687826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.463 [2024-12-09 23:17:42.687957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.463 [2024-12-09 23:17:42.687971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:15.463 [2024-12-09 23:17:42.687988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.463 [2024-12-09 23:17:42.687998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.463 [2024-12-09 23:17:42.688037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.463 [2024-12-09 23:17:42.688051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:15.463 [2024-12-09 23:17:42.688062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.463 [2024-12-09 23:17:42.688073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.463 [2024-12-09 23:17:42.688112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.463 [2024-12-09 23:17:42.688124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:15.463 [2024-12-09 23:17:42.688139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.463 [2024-12-09 23:17:42.688150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.463 [2024-12-09 23:17:42.688194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:15.463 [2024-12-09 23:17:42.688207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:15.463 [2024-12-09 23:17:42.688218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:15.463 [2024-12-09 23:17:42.688228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:15.463 [2024-12-09 23:17:42.688387] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 560.366 ms, result 0 00:36:16.840 00:36:16.840 00:36:16.840 23:17:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:18.230 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:18.230 23:17:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:18.489 [2024-12-09 23:17:45.640406] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
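
The 'FTL shutdown' sequence above finishes with result 0, and the md5sum -c line reports "testfile: OK", so the first read-back matches the stored checksum. The spdk_dd invocation that follows reads the next 262144 blocks (--skip=262144) from ftl0 into testfile2, evidently for the same comparison (dirty_shutdown.sh line 95). The statistics in the shutdown dump are also internally consistent; a quick check using only numbers quoted from the dump (the 4 KiB block size is implied by the progress output, not stated directly):

    # Numbers quoted from the "Bands validity" / "Dump statistics" sections.
    valid_lbas = 261120 + 1536            # band 1 (closed) + band 2 (open)
    assert valid_lbas == 262656           # matches "total valid LBAs: 262656"

    total_writes = 263616
    user_writes = 261632
    waf = total_writes / user_writes      # write amplification factor
    assert round(waf, 4) == 1.0076        # matches "WAF: 1.0076"

    # The dd transfer size is consistent with the Copying progress as well:
    blocks = 262144                       # --count / --skip on the spdk_dd line
    block_size = 4096                     # implied: 1024 MiB copied / 262144 blocks
    assert blocks * block_size == 1024 * 2**20
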
00:36:18.489 [2024-12-09 23:17:45.640566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83293 ] 00:36:18.489 [2024-12-09 23:17:45.822979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.748 [2024-12-09 23:17:45.959522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.006 [2024-12-09 23:17:46.336143] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:19.006 [2024-12-09 23:17:46.336229] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:19.266 [2024-12-09 23:17:46.501416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.266 [2024-12-09 23:17:46.501511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:19.266 [2024-12-09 23:17:46.501528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:19.266 [2024-12-09 23:17:46.501541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.501606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.266 [2024-12-09 23:17:46.501622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:19.266 [2024-12-09 23:17:46.501633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:36:19.266 [2024-12-09 23:17:46.501643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.501667] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:19.266 [2024-12-09 23:17:46.502702] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:19.266 [2024-12-09 23:17:46.502730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.266 [2024-12-09 23:17:46.502742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:19.266 [2024-12-09 23:17:46.502754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:36:19.266 [2024-12-09 23:17:46.502764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.504923] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:36:19.266 [2024-12-09 23:17:46.525656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.266 [2024-12-09 23:17:46.525734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:36:19.266 [2024-12-09 23:17:46.525752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.764 ms 00:36:19.266 [2024-12-09 23:17:46.525764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.525884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.266 [2024-12-09 23:17:46.525899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:36:19.266 [2024-12-09 23:17:46.525911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:36:19.266 [2024-12-09 23:17:46.525922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.536917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:36:19.266 [2024-12-09 23:17:46.536974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:19.266 [2024-12-09 23:17:46.536988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.903 ms 00:36:19.266 [2024-12-09 23:17:46.537023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.537121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.266 [2024-12-09 23:17:46.537140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:19.266 [2024-12-09 23:17:46.537152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:36:19.266 [2024-12-09 23:17:46.537163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.537243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.266 [2024-12-09 23:17:46.537256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:19.266 [2024-12-09 23:17:46.537267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:36:19.266 [2024-12-09 23:17:46.537277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.537310] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:19.266 [2024-12-09 23:17:46.542239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.266 [2024-12-09 23:17:46.542283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:19.266 [2024-12-09 23:17:46.542301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.943 ms 00:36:19.266 [2024-12-09 23:17:46.542311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.542357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.266 [2024-12-09 23:17:46.542368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:19.266 [2024-12-09 23:17:46.542379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:36:19.266 [2024-12-09 23:17:46.542390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.542437] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:36:19.266 [2024-12-09 23:17:46.542490] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:36:19.266 [2024-12-09 23:17:46.542529] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:36:19.266 [2024-12-09 23:17:46.542550] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:36:19.266 [2024-12-09 23:17:46.542641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:19.266 [2024-12-09 23:17:46.542655] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:19.266 [2024-12-09 23:17:46.542669] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:19.266 [2024-12-09 23:17:46.542682] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:19.266 [2024-12-09 23:17:46.542695] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:19.266 [2024-12-09 23:17:46.542706] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:19.266 [2024-12-09 23:17:46.542717] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:19.266 [2024-12-09 23:17:46.542730] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:19.266 [2024-12-09 23:17:46.542740] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:19.266 [2024-12-09 23:17:46.542751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.266 [2024-12-09 23:17:46.542761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:19.266 [2024-12-09 23:17:46.542772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:36:19.266 [2024-12-09 23:17:46.542782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.542854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.266 [2024-12-09 23:17:46.542866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:19.266 [2024-12-09 23:17:46.542876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:36:19.266 [2024-12-09 23:17:46.542885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.266 [2024-12-09 23:17:46.542988] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:19.266 [2024-12-09 23:17:46.543004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:19.266 [2024-12-09 23:17:46.543015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:19.266 [2024-12-09 23:17:46.543026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.266 [2024-12-09 23:17:46.543036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:19.266 [2024-12-09 23:17:46.543046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:19.266 [2024-12-09 23:17:46.543055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:19.266 [2024-12-09 23:17:46.543065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:19.266 [2024-12-09 23:17:46.543075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:19.266 [2024-12-09 23:17:46.543084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:19.266 [2024-12-09 23:17:46.543096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:19.266 [2024-12-09 23:17:46.543106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:19.266 [2024-12-09 23:17:46.543116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:19.266 [2024-12-09 23:17:46.543136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:19.266 [2024-12-09 23:17:46.543146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:19.266 [2024-12-09 23:17:46.543155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.266 [2024-12-09 23:17:46.543165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:19.266 [2024-12-09 23:17:46.543174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:19.266 [2024-12-09 23:17:46.543184] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.267 [2024-12-09 23:17:46.543193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:19.267 [2024-12-09 23:17:46.543203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:19.267 [2024-12-09 23:17:46.543212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:19.267 [2024-12-09 23:17:46.543221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:19.267 [2024-12-09 23:17:46.543231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:19.267 [2024-12-09 23:17:46.543240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:19.267 [2024-12-09 23:17:46.543249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:19.267 [2024-12-09 23:17:46.543258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:19.267 [2024-12-09 23:17:46.543267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:19.267 [2024-12-09 23:17:46.543276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:19.267 [2024-12-09 23:17:46.543285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:19.267 [2024-12-09 23:17:46.543294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:19.267 [2024-12-09 23:17:46.543303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:19.267 [2024-12-09 23:17:46.543312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:19.267 [2024-12-09 23:17:46.543321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:19.267 [2024-12-09 23:17:46.543330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:19.267 [2024-12-09 23:17:46.543338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:19.267 [2024-12-09 23:17:46.543348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:19.267 [2024-12-09 23:17:46.543357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:19.267 [2024-12-09 23:17:46.543366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:19.267 [2024-12-09 23:17:46.543375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.267 [2024-12-09 23:17:46.543384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:19.267 [2024-12-09 23:17:46.543393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:19.267 [2024-12-09 23:17:46.543403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.267 [2024-12-09 23:17:46.543412] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:19.267 [2024-12-09 23:17:46.543422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:19.267 [2024-12-09 23:17:46.543431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:19.267 [2024-12-09 23:17:46.543441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:19.267 [2024-12-09 23:17:46.543463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:19.267 [2024-12-09 23:17:46.543473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:19.267 [2024-12-09 23:17:46.543482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:19.267 
[2024-12-09 23:17:46.543492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:19.267 [2024-12-09 23:17:46.543502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:19.267 [2024-12-09 23:17:46.543512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:19.267 [2024-12-09 23:17:46.543522] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:19.267 [2024-12-09 23:17:46.543534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:19.267 [2024-12-09 23:17:46.543550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:19.267 [2024-12-09 23:17:46.543561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:19.267 [2024-12-09 23:17:46.543572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:19.267 [2024-12-09 23:17:46.543583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:19.267 [2024-12-09 23:17:46.543593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:19.267 [2024-12-09 23:17:46.543603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:19.267 [2024-12-09 23:17:46.543613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:19.267 [2024-12-09 23:17:46.543624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:19.267 [2024-12-09 23:17:46.543636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:19.267 [2024-12-09 23:17:46.543646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:19.267 [2024-12-09 23:17:46.543657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:19.267 [2024-12-09 23:17:46.543668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:19.267 [2024-12-09 23:17:46.543678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:19.267 [2024-12-09 23:17:46.543689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:19.267 [2024-12-09 23:17:46.543700] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:36:19.267 [2024-12-09 23:17:46.543711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:19.267 [2024-12-09 23:17:46.543721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:36:19.267 [2024-12-09 23:17:46.543732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:19.267 [2024-12-09 23:17:46.543742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:19.267 [2024-12-09 23:17:46.543753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:19.267 [2024-12-09 23:17:46.543765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.267 [2024-12-09 23:17:46.543775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:19.267 [2024-12-09 23:17:46.543786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:36:19.267 [2024-12-09 23:17:46.543797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.267 [2024-12-09 23:17:46.588657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.267 [2024-12-09 23:17:46.588731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:19.267 [2024-12-09 23:17:46.588748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.876 ms 00:36:19.267 [2024-12-09 23:17:46.588765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.267 [2024-12-09 23:17:46.588888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.267 [2024-12-09 23:17:46.588900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:19.267 [2024-12-09 23:17:46.588912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:36:19.267 [2024-12-09 23:17:46.588923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.526 [2024-12-09 23:17:46.649294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.526 [2024-12-09 23:17:46.649628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:19.526 [2024-12-09 23:17:46.649657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.374 ms 00:36:19.526 [2024-12-09 23:17:46.649670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.526 [2024-12-09 23:17:46.649742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.526 [2024-12-09 23:17:46.649754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:19.526 [2024-12-09 23:17:46.649775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:19.526 [2024-12-09 23:17:46.649785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.526 [2024-12-09 23:17:46.650667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.526 [2024-12-09 23:17:46.650688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:19.526 [2024-12-09 23:17:46.650699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:36:19.526 [2024-12-09 23:17:46.650710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.526 [2024-12-09 23:17:46.650854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.526 [2024-12-09 23:17:46.650869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:19.526 [2024-12-09 23:17:46.650888] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:36:19.526 [2024-12-09 23:17:46.650898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.526 [2024-12-09 23:17:46.670233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.526 [2024-12-09 23:17:46.670305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:19.526 [2024-12-09 23:17:46.670323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.342 ms 00:36:19.527 [2024-12-09 23:17:46.670334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.527 [2024-12-09 23:17:46.690521] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:19.527 [2024-12-09 23:17:46.690604] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:36:19.527 [2024-12-09 23:17:46.690624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.527 [2024-12-09 23:17:46.690635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:36:19.527 [2024-12-09 23:17:46.690649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.123 ms 00:36:19.527 [2024-12-09 23:17:46.690660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.527 [2024-12-09 23:17:46.722996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.527 [2024-12-09 23:17:46.723105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:36:19.527 [2024-12-09 23:17:46.723126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.298 ms 00:36:19.527 [2024-12-09 23:17:46.723139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.527 [2024-12-09 23:17:46.743441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.527 [2024-12-09 23:17:46.743529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:36:19.527 [2024-12-09 23:17:46.743548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.221 ms 00:36:19.527 [2024-12-09 23:17:46.743558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.527 [2024-12-09 23:17:46.763435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.527 [2024-12-09 23:17:46.763523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:36:19.527 [2024-12-09 23:17:46.763540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.833 ms 00:36:19.527 [2024-12-09 23:17:46.763551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.527 [2024-12-09 23:17:46.764499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.527 [2024-12-09 23:17:46.764672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:19.527 [2024-12-09 23:17:46.764705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:36:19.527 [2024-12-09 23:17:46.764715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.527 [2024-12-09 23:17:46.856191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.527 [2024-12-09 23:17:46.856503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:36:19.527 [2024-12-09 23:17:46.856539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 91.583 ms 00:36:19.527 [2024-12-09 23:17:46.856552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.786 [2024-12-09 23:17:46.870310] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:19.786 [2024-12-09 23:17:46.873575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.786 [2024-12-09 23:17:46.873624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:19.786 [2024-12-09 23:17:46.873642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.933 ms 00:36:19.786 [2024-12-09 23:17:46.873653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.786 [2024-12-09 23:17:46.873780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.786 [2024-12-09 23:17:46.873794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:36:19.786 [2024-12-09 23:17:46.873812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:36:19.786 [2024-12-09 23:17:46.873823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.786 [2024-12-09 23:17:46.874937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.786 [2024-12-09 23:17:46.874968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:19.786 [2024-12-09 23:17:46.874979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.042 ms 00:36:19.786 [2024-12-09 23:17:46.874989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.786 [2024-12-09 23:17:46.875021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.786 [2024-12-09 23:17:46.875033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:19.786 [2024-12-09 23:17:46.875044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:36:19.786 [2024-12-09 23:17:46.875054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.786 [2024-12-09 23:17:46.875095] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:36:19.786 [2024-12-09 23:17:46.875120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.786 [2024-12-09 23:17:46.875131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:36:19.786 [2024-12-09 23:17:46.875143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:36:19.786 [2024-12-09 23:17:46.875152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.786 [2024-12-09 23:17:46.914279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.786 [2024-12-09 23:17:46.914354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:19.786 [2024-12-09 23:17:46.914381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.166 ms 00:36:19.786 [2024-12-09 23:17:46.914392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:19.786 [2024-12-09 23:17:46.914533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:19.787 [2024-12-09 23:17:46.914548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:19.787 [2024-12-09 23:17:46.914560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:36:19.787 [2024-12-09 23:17:46.914589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
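Each FTL management step above is logged by mngt/ftl_mngt.c as an Action / name / duration / status quad, so per-step timings can be pulled straight out of the console log and compared against the overall figure in the 'Management process finished' summary that follows (414.936 ms for this startup). A minimal sketch, assuming the console output was saved to a file; build.log is a placeholder name:

    # Sum the per-step durations reported by trace_step. Each matching line
    # ends in "duration: <ms> ms", so the value is the second-to-last field.
    awk '/trace_step.*duration:/ { total += $(NF - 1) }
         END { printf "trace_step total: %.3f ms\n", total }' build.log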
00:36:19.787 [2024-12-09 23:17:46.916179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 414.936 ms, result 0 00:36:21.160  [2024-12-09T23:17:49.434Z] Copying: 26/1024 [MB] (26 MBps) [2024-12-09T23:17:50.369Z] Copying: 53/1024 [MB] (27 MBps) [2024-12-09T23:17:51.304Z] Copying: 80/1024 [MB] (26 MBps) [2024-12-09T23:17:52.256Z] Copying: 107/1024 [MB] (26 MBps) [2024-12-09T23:17:53.190Z] Copying: 134/1024 [MB] (27 MBps) [2024-12-09T23:17:54.134Z] Copying: 161/1024 [MB] (27 MBps) [2024-12-09T23:17:55.546Z] Copying: 188/1024 [MB] (27 MBps) [2024-12-09T23:17:56.481Z] Copying: 215/1024 [MB] (27 MBps) [2024-12-09T23:17:57.420Z] Copying: 242/1024 [MB] (26 MBps) [2024-12-09T23:17:58.355Z] Copying: 268/1024 [MB] (26 MBps) [2024-12-09T23:17:59.289Z] Copying: 295/1024 [MB] (26 MBps) [2024-12-09T23:18:00.225Z] Copying: 322/1024 [MB] (27 MBps) [2024-12-09T23:18:01.165Z] Copying: 349/1024 [MB] (27 MBps) [2024-12-09T23:18:02.540Z] Copying: 377/1024 [MB] (27 MBps) [2024-12-09T23:18:03.475Z] Copying: 404/1024 [MB] (27 MBps) [2024-12-09T23:18:04.409Z] Copying: 431/1024 [MB] (27 MBps) [2024-12-09T23:18:05.342Z] Copying: 459/1024 [MB] (28 MBps) [2024-12-09T23:18:06.274Z] Copying: 489/1024 [MB] (29 MBps) [2024-12-09T23:18:07.208Z] Copying: 516/1024 [MB] (26 MBps) [2024-12-09T23:18:08.140Z] Copying: 543/1024 [MB] (27 MBps) [2024-12-09T23:18:09.515Z] Copying: 570/1024 [MB] (26 MBps) [2024-12-09T23:18:10.451Z] Copying: 596/1024 [MB] (26 MBps) [2024-12-09T23:18:11.387Z] Copying: 623/1024 [MB] (26 MBps) [2024-12-09T23:18:12.323Z] Copying: 651/1024 [MB] (27 MBps) [2024-12-09T23:18:13.260Z] Copying: 677/1024 [MB] (26 MBps) [2024-12-09T23:18:14.194Z] Copying: 703/1024 [MB] (25 MBps) [2024-12-09T23:18:15.128Z] Copying: 729/1024 [MB] (26 MBps) [2024-12-09T23:18:16.512Z] Copying: 756/1024 [MB] (26 MBps) [2024-12-09T23:18:17.448Z] Copying: 783/1024 [MB] (26 MBps) [2024-12-09T23:18:18.389Z] Copying: 809/1024 [MB] (26 MBps) [2024-12-09T23:18:19.327Z] Copying: 836/1024 [MB] (26 MBps) [2024-12-09T23:18:20.318Z] Copying: 863/1024 [MB] (27 MBps) [2024-12-09T23:18:21.256Z] Copying: 890/1024 [MB] (27 MBps) [2024-12-09T23:18:22.191Z] Copying: 916/1024 [MB] (25 MBps) [2024-12-09T23:18:23.182Z] Copying: 942/1024 [MB] (25 MBps) [2024-12-09T23:18:24.116Z] Copying: 968/1024 [MB] (26 MBps) [2024-12-09T23:18:25.492Z] Copying: 995/1024 [MB] (26 MBps) [2024-12-09T23:18:25.492Z] Copying: 1022/1024 [MB] (27 MBps) [2024-12-09T23:18:25.492Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-09 23:18:25.264875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.156 [2024-12-09 23:18:25.264964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:58.156 [2024-12-09 23:18:25.264987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:58.156 [2024-12-09 23:18:25.265002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.156 [2024-12-09 23:18:25.265034] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:58.156 [2024-12-09 23:18:25.271073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.156 [2024-12-09 23:18:25.271331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:58.156 [2024-12-09 23:18:25.271367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.021 ms 00:36:58.156 [2024-12-09 23:18:25.271382] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:36:58.156 [2024-12-09 23:18:25.271700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.156 [2024-12-09 23:18:25.271720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:58.156 [2024-12-09 23:18:25.271734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:36:58.156 [2024-12-09 23:18:25.271749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.156 [2024-12-09 23:18:25.275498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.156 [2024-12-09 23:18:25.275531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:58.156 [2024-12-09 23:18:25.275546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.734 ms 00:36:58.156 [2024-12-09 23:18:25.275568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.156 [2024-12-09 23:18:25.281605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.156 [2024-12-09 23:18:25.281796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:58.156 [2024-12-09 23:18:25.281819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.020 ms 00:36:58.156 [2024-12-09 23:18:25.281830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.156 [2024-12-09 23:18:25.321412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.156 [2024-12-09 23:18:25.321497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:58.156 [2024-12-09 23:18:25.321514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.545 ms 00:36:58.156 [2024-12-09 23:18:25.321541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.156 [2024-12-09 23:18:25.344164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.156 [2024-12-09 23:18:25.344504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:58.156 [2024-12-09 23:18:25.344533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.584 ms 00:36:58.156 [2024-12-09 23:18:25.344545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.156 [2024-12-09 23:18:25.346690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.156 [2024-12-09 23:18:25.346738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:58.156 [2024-12-09 23:18:25.346752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:36:58.156 [2024-12-09 23:18:25.346763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.156 [2024-12-09 23:18:25.386836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.156 [2024-12-09 23:18:25.386911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:36:58.156 [2024-12-09 23:18:25.386928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.115 ms 00:36:58.156 [2024-12-09 23:18:25.386939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.156 [2024-12-09 23:18:25.426314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.156 [2024-12-09 23:18:25.426396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:36:58.156 [2024-12-09 23:18:25.426414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.361 ms 00:36:58.156 
[2024-12-09 23:18:25.426425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.156 [2024-12-09 23:18:25.465764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.156 [2024-12-09 23:18:25.465845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:58.156 [2024-12-09 23:18:25.465863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.296 ms 00:36:58.156 [2024-12-09 23:18:25.465875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.416 [2024-12-09 23:18:25.504971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.416 [2024-12-09 23:18:25.505048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:58.416 [2024-12-09 23:18:25.505065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.000 ms 00:36:58.416 [2024-12-09 23:18:25.505076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.416 [2024-12-09 23:18:25.505150] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:58.416 [2024-12-09 23:18:25.505181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:58.416 [2024-12-09 23:18:25.505200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:36:58.416 [2024-12-09 23:18:25.505213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 
00:36:58.416 [2024-12-09 23:18:25.505376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 
wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:58.416 [2024-12-09 23:18:25.505839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.505995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506208] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:58.417 [2024-12-09 23:18:25.506317] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:58.417 [2024-12-09 23:18:25.506327] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 916e0778-c8dd-45bc-ac27-6bd810c141dd 00:36:58.417 [2024-12-09 23:18:25.506338] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:36:58.417 [2024-12-09 23:18:25.506349] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:36:58.417 [2024-12-09 23:18:25.506359] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:58.417 [2024-12-09 23:18:25.506370] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:58.417 [2024-12-09 23:18:25.506394] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:58.417 [2024-12-09 23:18:25.506404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:58.417 [2024-12-09 23:18:25.506414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:58.417 [2024-12-09 23:18:25.506424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:58.417 [2024-12-09 23:18:25.506434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:58.417 [2024-12-09 23:18:25.506445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.417 [2024-12-09 23:18:25.506465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:58.417 [2024-12-09 23:18:25.506484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:36:58.417 [2024-12-09 23:18:25.506498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.417 [2024-12-09 23:18:25.527271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.417 [2024-12-09 23:18:25.527340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:58.417 [2024-12-09 23:18:25.527356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.746 ms 00:36:58.417 [2024-12-09 23:18:25.527367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.417 [2024-12-09 23:18:25.527986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:58.417 [2024-12-09 23:18:25.528013] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:58.417 [2024-12-09 23:18:25.528025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:36:58.417 [2024-12-09 23:18:25.528036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.417 [2024-12-09 23:18:25.579642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:58.417 [2024-12-09 23:18:25.579705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:58.417 [2024-12-09 23:18:25.579721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.417 [2024-12-09 23:18:25.579732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.417 [2024-12-09 23:18:25.579813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:58.417 [2024-12-09 23:18:25.579831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:58.417 [2024-12-09 23:18:25.579843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.417 [2024-12-09 23:18:25.579853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.417 [2024-12-09 23:18:25.579938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:58.417 [2024-12-09 23:18:25.579952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:58.417 [2024-12-09 23:18:25.579964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.417 [2024-12-09 23:18:25.579974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.417 [2024-12-09 23:18:25.579992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:58.417 [2024-12-09 23:18:25.580003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:58.417 [2024-12-09 23:18:25.580018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.417 [2024-12-09 23:18:25.580029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.417 [2024-12-09 23:18:25.707223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:58.417 [2024-12-09 23:18:25.707309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:58.417 [2024-12-09 23:18:25.707326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.417 [2024-12-09 23:18:25.707337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.676 [2024-12-09 23:18:25.816646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:58.676 [2024-12-09 23:18:25.816974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:58.676 [2024-12-09 23:18:25.817001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.676 [2024-12-09 23:18:25.817013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.676 [2024-12-09 23:18:25.817123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:58.676 [2024-12-09 23:18:25.817135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:58.676 [2024-12-09 23:18:25.817147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.676 [2024-12-09 23:18:25.817159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.676 [2024-12-09 23:18:25.817205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:36:58.676 [2024-12-09 23:18:25.817217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:58.676 [2024-12-09 23:18:25.817228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.676 [2024-12-09 23:18:25.817246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.676 [2024-12-09 23:18:25.817387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:58.676 [2024-12-09 23:18:25.817402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:58.676 [2024-12-09 23:18:25.817413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.676 [2024-12-09 23:18:25.817424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.676 [2024-12-09 23:18:25.817497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:58.676 [2024-12-09 23:18:25.817510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:58.676 [2024-12-09 23:18:25.817521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.676 [2024-12-09 23:18:25.817532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.676 [2024-12-09 23:18:25.817584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:58.676 [2024-12-09 23:18:25.817596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:58.676 [2024-12-09 23:18:25.817607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.676 [2024-12-09 23:18:25.817617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.676 [2024-12-09 23:18:25.817661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:58.676 [2024-12-09 23:18:25.817672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:58.676 [2024-12-09 23:18:25.817683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:58.676 [2024-12-09 23:18:25.817697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:58.676 [2024-12-09 23:18:25.817819] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 553.815 ms, result 0 00:36:59.625 00:36:59.625 00:36:59.625 23:18:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:37:01.524 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:37:01.524 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:37:01.524 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:37:01.524 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:01.524 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:37:01.783 23:18:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:37:01.783 23:18:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:37:01.783 23:18:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:37:01.783 Process with pid 81519 is not found 00:37:01.783 23:18:29 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@37 -- # killprocess 81519 00:37:01.783 23:18:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81519 ']' 00:37:01.783 23:18:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81519 00:37:01.783 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81519) - No such process 00:37:01.783 23:18:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81519 is not found' 00:37:01.783 23:18:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:37:02.042 23:18:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:37:02.042 23:18:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:37:02.042 Remove shared memory files 00:37:02.042 23:18:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:37:02.042 23:18:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:37:02.042 23:18:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:37:02.042 23:18:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:37:02.042 23:18:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:37:02.042 ************************************ 00:37:02.042 END TEST ftl_dirty_shutdown 00:37:02.042 ************************************ 00:37:02.042 00:37:02.042 real 3m35.527s 00:37:02.042 user 4m1.817s 00:37:02.042 sys 0m39.288s 00:37:02.042 23:18:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:02.042 23:18:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:02.299 23:18:29 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:37:02.299 23:18:29 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:02.299 23:18:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:02.299 23:18:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:02.299 ************************************ 00:37:02.299 START TEST ftl_upgrade_shutdown 00:37:02.299 ************************************ 00:37:02.299 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:37:02.299 * Looking for test storage... 
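The ftl_dirty_shutdown teardown above exercises the killprocess guard from test/common/autotest_common.sh: it requires a recorded pid, probes it with kill -0, and only reports (rather than fails) when the process is already gone, which is what happens here for pid 81519. A simplified sketch of that guard, reconstructed from the trace rather than copied from the helper itself:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1             # the '[ -z 81519 ]' check in the trace
        if kill -0 "$pid" 2> /dev/null; then  # pid still alive: terminate and reap it
            kill "$pid"
            wait "$pid" 2> /dev/null || true
        else                                  # already exited, as for pid 81519 here
            echo "Process with pid $pid is not found"
        fi
    }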
00:37:02.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:37:02.299 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:02.299 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:37:02.299 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:02.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.557 --rc genhtml_branch_coverage=1 00:37:02.557 --rc genhtml_function_coverage=1 00:37:02.557 --rc genhtml_legend=1 00:37:02.557 --rc geninfo_all_blocks=1 00:37:02.557 --rc geninfo_unexecuted_blocks=1 00:37:02.557 00:37:02.557 ' 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:02.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.557 --rc genhtml_branch_coverage=1 00:37:02.557 --rc genhtml_function_coverage=1 00:37:02.557 --rc genhtml_legend=1 00:37:02.557 --rc geninfo_all_blocks=1 00:37:02.557 --rc geninfo_unexecuted_blocks=1 00:37:02.557 00:37:02.557 ' 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:02.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.557 --rc genhtml_branch_coverage=1 00:37:02.557 --rc genhtml_function_coverage=1 00:37:02.557 --rc genhtml_legend=1 00:37:02.557 --rc geninfo_all_blocks=1 00:37:02.557 --rc geninfo_unexecuted_blocks=1 00:37:02.557 00:37:02.557 ' 00:37:02.557 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:02.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.558 --rc genhtml_branch_coverage=1 00:37:02.558 --rc genhtml_function_coverage=1 00:37:02.558 --rc genhtml_legend=1 00:37:02.558 --rc geninfo_all_blocks=1 00:37:02.558 --rc geninfo_unexecuted_blocks=1 00:37:02.558 00:37:02.558 ' 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:37:02.558 23:18:29 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83800 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83800 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83800 ']' 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:02.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:02.558 23:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:02.558 [2024-12-09 23:18:29.823016] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:37:02.558 [2024-12-09 23:18:29.823166] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83800 ] 00:37:02.817 [2024-12-09 23:18:29.992237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.817 [2024-12-09 23:18:30.126970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:37:03.778 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:37:04.037 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:37:04.037 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:37:04.037 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:37:04.037 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:37:04.037 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:37:04.037 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:37:04.295 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:37:04.295 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:37:04.295 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:37:04.295 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:37:04.295 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:04.295 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:37:04.295 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:37:04.295 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:04.554 { 00:37:04.554 "name": "basen1", 00:37:04.554 "aliases": [ 00:37:04.554 "0ad890c7-be4f-4b70-87a0-41e063aed8f3" 00:37:04.554 ], 00:37:04.554 "product_name": "NVMe disk", 00:37:04.554 "block_size": 4096, 00:37:04.554 "num_blocks": 1310720, 00:37:04.554 "uuid": "0ad890c7-be4f-4b70-87a0-41e063aed8f3", 00:37:04.554 "numa_id": -1, 00:37:04.554 "assigned_rate_limits": { 00:37:04.554 "rw_ios_per_sec": 0, 00:37:04.554 "rw_mbytes_per_sec": 0, 00:37:04.554 "r_mbytes_per_sec": 0, 00:37:04.554 "w_mbytes_per_sec": 0 00:37:04.554 }, 00:37:04.554 "claimed": true, 00:37:04.554 "claim_type": "read_many_write_one", 00:37:04.554 "zoned": false, 00:37:04.554 "supported_io_types": { 00:37:04.554 "read": true, 00:37:04.554 "write": true, 00:37:04.554 "unmap": true, 00:37:04.554 "flush": true, 00:37:04.554 "reset": true, 00:37:04.554 "nvme_admin": true, 00:37:04.554 "nvme_io": true, 00:37:04.554 "nvme_io_md": false, 00:37:04.554 "write_zeroes": true, 00:37:04.554 "zcopy": false, 00:37:04.554 "get_zone_info": false, 00:37:04.554 "zone_management": false, 00:37:04.554 "zone_append": false, 00:37:04.554 "compare": true, 00:37:04.554 "compare_and_write": false, 00:37:04.554 "abort": true, 00:37:04.554 "seek_hole": false, 00:37:04.554 "seek_data": false, 00:37:04.554 "copy": true, 00:37:04.554 "nvme_iov_md": false 00:37:04.554 }, 00:37:04.554 "driver_specific": { 00:37:04.554 "nvme": [ 00:37:04.554 { 00:37:04.554 "pci_address": "0000:00:11.0", 00:37:04.554 "trid": { 00:37:04.554 "trtype": "PCIe", 00:37:04.554 "traddr": "0000:00:11.0" 00:37:04.554 }, 00:37:04.554 "ctrlr_data": { 00:37:04.554 "cntlid": 0, 00:37:04.554 "vendor_id": "0x1b36", 00:37:04.554 "model_number": "QEMU NVMe Ctrl", 00:37:04.554 "serial_number": "12341", 00:37:04.554 "firmware_revision": "8.0.0", 00:37:04.554 "subnqn": "nqn.2019-08.org.qemu:12341", 00:37:04.554 "oacs": { 00:37:04.554 "security": 0, 00:37:04.554 "format": 1, 00:37:04.554 "firmware": 0, 00:37:04.554 "ns_manage": 1 00:37:04.554 }, 00:37:04.554 "multi_ctrlr": false, 00:37:04.554 "ana_reporting": false 00:37:04.554 }, 00:37:04.554 "vs": { 00:37:04.554 "nvme_version": "1.4" 00:37:04.554 }, 00:37:04.554 "ns_data": { 00:37:04.554 "id": 1, 00:37:04.554 "can_share": false 00:37:04.554 } 00:37:04.554 } 00:37:04.554 ], 00:37:04.554 "mp_policy": "active_passive" 00:37:04.554 } 00:37:04.554 } 00:37:04.554 ]' 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:04.554 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:04.817 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=305afbe7-467f-4732-a943-bbdb4cc3250c 00:37:04.817 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:37:04.817 23:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 305afbe7-467f-4732-a943-bbdb4cc3250c 00:37:05.077 23:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:37:05.335 23:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=568dcebd-1922-445a-ad60-d8b534097aa3 00:37:05.335 23:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 568dcebd-1922-445a-ad60-d8b534097aa3 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=ca39566f-163b-47a5-b305-5c5cf6f53697 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z ca39566f-163b-47a5-b305-5c5cf6f53697 ]] 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 ca39566f-163b-47a5-b305-5c5cf6f53697 5120 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=ca39566f-163b-47a5-b305-5c5cf6f53697 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size ca39566f-163b-47a5-b305-5c5cf6f53697 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ca39566f-163b-47a5-b305-5c5cf6f53697 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:37:05.594 23:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ca39566f-163b-47a5-b305-5c5cf6f53697 00:37:05.853 23:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:05.853 { 00:37:05.853 "name": "ca39566f-163b-47a5-b305-5c5cf6f53697", 00:37:05.853 "aliases": [ 00:37:05.853 "lvs/basen1p0" 00:37:05.853 ], 00:37:05.853 "product_name": "Logical Volume", 00:37:05.853 "block_size": 4096, 00:37:05.853 "num_blocks": 5242880, 00:37:05.853 "uuid": "ca39566f-163b-47a5-b305-5c5cf6f53697", 00:37:05.853 "assigned_rate_limits": { 00:37:05.853 "rw_ios_per_sec": 0, 00:37:05.853 "rw_mbytes_per_sec": 0, 00:37:05.853 "r_mbytes_per_sec": 0, 00:37:05.853 "w_mbytes_per_sec": 0 00:37:05.853 }, 00:37:05.853 "claimed": false, 00:37:05.853 "zoned": false, 00:37:05.853 "supported_io_types": { 00:37:05.853 "read": true, 00:37:05.853 "write": true, 00:37:05.853 "unmap": true, 00:37:05.853 "flush": false, 00:37:05.853 "reset": true, 00:37:05.853 "nvme_admin": false, 00:37:05.853 "nvme_io": false, 00:37:05.853 "nvme_io_md": false, 00:37:05.853 "write_zeroes": 
true, 00:37:05.853 "zcopy": false, 00:37:05.853 "get_zone_info": false, 00:37:05.853 "zone_management": false, 00:37:05.853 "zone_append": false, 00:37:05.853 "compare": false, 00:37:05.853 "compare_and_write": false, 00:37:05.853 "abort": false, 00:37:05.853 "seek_hole": true, 00:37:05.853 "seek_data": true, 00:37:05.853 "copy": false, 00:37:05.853 "nvme_iov_md": false 00:37:05.853 }, 00:37:05.853 "driver_specific": { 00:37:05.853 "lvol": { 00:37:05.853 "lvol_store_uuid": "568dcebd-1922-445a-ad60-d8b534097aa3", 00:37:05.853 "base_bdev": "basen1", 00:37:05.853 "thin_provision": true, 00:37:05.853 "num_allocated_clusters": 0, 00:37:05.853 "snapshot": false, 00:37:05.853 "clone": false, 00:37:05.853 "esnap_clone": false 00:37:05.853 } 00:37:05.853 } 00:37:05.853 } 00:37:05.853 ]' 00:37:05.853 23:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:05.853 23:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:37:05.853 23:18:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:05.853 23:18:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:37:05.853 23:18:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:37:05.853 23:18:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:37:05.853 23:18:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:37:05.853 23:18:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:37:05.853 23:18:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:37:06.112 23:18:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:37:06.112 23:18:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:37:06.112 23:18:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:37:06.371 23:18:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:37:06.371 23:18:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:37:06.371 23:18:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d ca39566f-163b-47a5-b305-5c5cf6f53697 -c cachen1p0 --l2p_dram_limit 2 00:37:06.631 [2024-12-09 23:18:33.733079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.631 [2024-12-09 23:18:33.733159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:06.631 [2024-12-09 23:18:33.733180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:37:06.631 [2024-12-09 23:18:33.733192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.631 [2024-12-09 23:18:33.733275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.631 [2024-12-09 23:18:33.733288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:06.631 [2024-12-09 23:18:33.733302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:37:06.631 [2024-12-09 23:18:33.733314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.631 [2024-12-09 23:18:33.733339] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:06.631 [2024-12-09 
23:18:33.734501] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:06.631 [2024-12-09 23:18:33.734552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.631 [2024-12-09 23:18:33.734565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:06.631 [2024-12-09 23:18:33.734579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.215 ms 00:37:06.631 [2024-12-09 23:18:33.734590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.631 [2024-12-09 23:18:33.734686] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d2578713-fbfc-46d1-a4ec-463d87c2c101 00:37:06.631 [2024-12-09 23:18:33.736863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.631 [2024-12-09 23:18:33.737056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:37:06.631 [2024-12-09 23:18:33.737081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:37:06.631 [2024-12-09 23:18:33.737095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.631 [2024-12-09 23:18:33.750293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.631 [2024-12-09 23:18:33.750570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:06.631 [2024-12-09 23:18:33.750732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.120 ms 00:37:06.631 [2024-12-09 23:18:33.750778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.631 [2024-12-09 23:18:33.750879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.631 [2024-12-09 23:18:33.751011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:06.631 [2024-12-09 23:18:33.751098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:37:06.631 [2024-12-09 23:18:33.751134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.631 [2024-12-09 23:18:33.751247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.631 [2024-12-09 23:18:33.751288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:06.631 [2024-12-09 23:18:33.751547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:37:06.631 [2024-12-09 23:18:33.751598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.631 [2024-12-09 23:18:33.751661] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:06.631 [2024-12-09 23:18:33.757781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.631 [2024-12-09 23:18:33.757946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:06.631 [2024-12-09 23:18:33.758079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.139 ms 00:37:06.631 [2024-12-09 23:18:33.758118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.631 [2024-12-09 23:18:33.758234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.631 [2024-12-09 23:18:33.758272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:06.631 [2024-12-09 23:18:33.758306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:37:06.631 [2024-12-09 23:18:33.758374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:37:06.631 [2024-12-09 23:18:33.758480] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:37:06.631 [2024-12-09 23:18:33.758654] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:37:06.631 [2024-12-09 23:18:33.758717] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:06.631 [2024-12-09 23:18:33.758819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:37:06.631 [2024-12-09 23:18:33.758877] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:37:06.631 [2024-12-09 23:18:33.758974] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:37:06.631 [2024-12-09 23:18:33.759035] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:06.631 [2024-12-09 23:18:33.759066] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:06.631 [2024-12-09 23:18:33.759136] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:37:06.631 [2024-12-09 23:18:33.759171] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:37:06.631 [2024-12-09 23:18:33.759247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.631 [2024-12-09 23:18:33.759282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:06.631 [2024-12-09 23:18:33.759316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.779 ms 00:37:06.631 [2024-12-09 23:18:33.759379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.631 [2024-12-09 23:18:33.759505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.631 [2024-12-09 23:18:33.759588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:06.631 [2024-12-09 23:18:33.759661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:37:06.631 [2024-12-09 23:18:33.759691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.631 [2024-12-09 23:18:33.759809] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:06.631 [2024-12-09 23:18:33.759843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:06.631 [2024-12-09 23:18:33.760031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:06.631 [2024-12-09 23:18:33.760067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:06.631 [2024-12-09 23:18:33.760101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:06.631 [2024-12-09 23:18:33.760131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:06.631 [2024-12-09 23:18:33.760210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:06.631 [2024-12-09 23:18:33.760245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:06.631 [2024-12-09 23:18:33.760280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:06.631 [2024-12-09 23:18:33.760310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:06.631 [2024-12-09 23:18:33.760386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:06.631 [2024-12-09 23:18:33.760468] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:37:06.631 [2024-12-09 23:18:33.760544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:06.631 [2024-12-09 23:18:33.760581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:06.631 [2024-12-09 23:18:33.760615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:37:06.631 [2024-12-09 23:18:33.760645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:06.631 [2024-12-09 23:18:33.760681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:06.631 [2024-12-09 23:18:33.760755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:37:06.631 [2024-12-09 23:18:33.760793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:06.631 [2024-12-09 23:18:33.760824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:06.631 [2024-12-09 23:18:33.760856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:06.632 [2024-12-09 23:18:33.760885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:06.632 [2024-12-09 23:18:33.760948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:06.632 [2024-12-09 23:18:33.760982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:06.632 [2024-12-09 23:18:33.761015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:06.632 [2024-12-09 23:18:33.761044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:06.632 [2024-12-09 23:18:33.761076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:06.632 [2024-12-09 23:18:33.761106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:06.632 [2024-12-09 23:18:33.761172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:06.632 [2024-12-09 23:18:33.761258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:37:06.632 [2024-12-09 23:18:33.761336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:06.632 [2024-12-09 23:18:33.761370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:06.632 [2024-12-09 23:18:33.761406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:37:06.632 [2024-12-09 23:18:33.761487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:06.632 [2024-12-09 23:18:33.761528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:06.632 [2024-12-09 23:18:33.761558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:37:06.632 [2024-12-09 23:18:33.761627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:06.632 [2024-12-09 23:18:33.761641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:37:06.632 [2024-12-09 23:18:33.761654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:37:06.632 [2024-12-09 23:18:33.761664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:06.632 [2024-12-09 23:18:33.761676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:37:06.632 [2024-12-09 23:18:33.761685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:37:06.632 [2024-12-09 23:18:33.761697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:06.632 [2024-12-09 23:18:33.761706] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:37:06.632 [2024-12-09 23:18:33.761719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:37:06.632 [2024-12-09 23:18:33.761729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:06.632 [2024-12-09 23:18:33.761741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:06.632 [2024-12-09 23:18:33.761752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:06.632 [2024-12-09 23:18:33.761768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:06.632 [2024-12-09 23:18:33.761778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:06.632 [2024-12-09 23:18:33.761790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:06.632 [2024-12-09 23:18:33.761799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:06.632 [2024-12-09 23:18:33.761812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:06.632 [2024-12-09 23:18:33.761825] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:06.632 [2024-12-09 23:18:33.761846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:06.632 [2024-12-09 23:18:33.761858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:06.632 [2024-12-09 23:18:33.761872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:37:06.632 [2024-12-09 23:18:33.761884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:37:06.632 [2024-12-09 23:18:33.761898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:37:06.632 [2024-12-09 23:18:33.761909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:37:06.632 [2024-12-09 23:18:33.761924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:37:06.632 [2024-12-09 23:18:33.761935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:37:06.632 [2024-12-09 23:18:33.761949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:37:06.632 [2024-12-09 23:18:33.761960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:37:06.632 [2024-12-09 23:18:33.761977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:37:06.632 [2024-12-09 23:18:33.761987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:37:06.632 [2024-12-09 23:18:33.762000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:37:06.632 [2024-12-09 23:18:33.762012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:37:06.632 [2024-12-09 23:18:33.762025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:37:06.632 [2024-12-09 23:18:33.762035] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:37:06.632 [2024-12-09 23:18:33.762049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:06.632 [2024-12-09 23:18:33.762060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:06.632 [2024-12-09 23:18:33.762074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:06.632 [2024-12-09 23:18:33.762084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:06.632 [2024-12-09 23:18:33.762097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:37:06.632 [2024-12-09 23:18:33.762110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:06.632 [2024-12-09 23:18:33.762124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:06.632 [2024-12-09 23:18:33.762135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.364 ms 00:37:06.632 [2024-12-09 23:18:33.762148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:06.632 [2024-12-09 23:18:33.762227] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
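
For reference, the xtrace above assembles the full FTL device stack before this startup sequence runs. get_bdev_size() derives a bdev's size in MiB as block_size * num_blocks / 2^20 (4096 * 1310720 = 5120 MiB for basen1; 4096 * 5242880 = 20480 MiB for the lvol), which is why the 20480 MiB base volume has to be thin-provisioned on the 5 GiB namespace. A condensed, runnable sketch of the same RPCs follows; paths, PCI addresses, and sizes are taken from this run and are assumptions for any other environment.

  # Rebuild the FTL stack from this run's xtrace. Any pre-existing lvstore on
  # basen1 is removed first (clear_lvols above does the same via
  # bdev_lvol_get_lvstores / bdev_lvol_delete_lvstore).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Base device: a thin-provisioned (-t) 20480 MiB lvol on the 5 GiB basen1 namespace.
  lvs=$("$RPC" bdev_lvol_create_lvstore basen1 lvs)            # prints the lvstore UUID
  base=$("$RPC" bdev_lvol_create basen1p0 20480 -t -u "$lvs")  # prints the lvol bdev UUID

  # NV cache: a 5120 MiB split of the controller at 0000:00:10.0.
  "$RPC" bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
  "$RPC" bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0

  # FTL bdev on top of both; the first startup scrubs the NV cache, hence
  # the 60 s RPC timeout.
  "$RPC" -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2
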
00:37:06.632 [2024-12-09 23:18:33.762247] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:37:09.920 [2024-12-09 23:18:37.159974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:09.920 [2024-12-09 23:18:37.160350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:37:09.920 [2024-12-09 23:18:37.160463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3403.261 ms 00:37:09.920 [2024-12-09 23:18:37.160511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:09.920 [2024-12-09 23:18:37.207464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:09.920 [2024-12-09 23:18:37.207817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:09.920 [2024-12-09 23:18:37.207944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.658 ms 00:37:09.920 [2024-12-09 23:18:37.207990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:09.920 [2024-12-09 23:18:37.208120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:09.920 [2024-12-09 23:18:37.208137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:09.920 [2024-12-09 23:18:37.208150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:37:09.920 [2024-12-09 23:18:37.208188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.179 [2024-12-09 23:18:37.257047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.179 [2024-12-09 23:18:37.257119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:10.179 [2024-12-09 23:18:37.257136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 48.870 ms 00:37:10.179 [2024-12-09 23:18:37.257150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.179 [2024-12-09 23:18:37.257207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.179 [2024-12-09 23:18:37.257227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:10.179 [2024-12-09 23:18:37.257238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:37:10.179 [2024-12-09 23:18:37.257251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.179 [2024-12-09 23:18:37.257804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.179 [2024-12-09 23:18:37.257827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:10.179 [2024-12-09 23:18:37.257851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.460 ms 00:37:10.179 [2024-12-09 23:18:37.257865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.179 [2024-12-09 23:18:37.257911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.179 [2024-12-09 23:18:37.257926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:10.179 [2024-12-09 23:18:37.257940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:37:10.179 [2024-12-09 23:18:37.257956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.179 [2024-12-09 23:18:37.279772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.179 [2024-12-09 23:18:37.280070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:10.179 [2024-12-09 23:18:37.280100] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.826 ms 00:37:10.179 [2024-12-09 23:18:37.280115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.179 [2024-12-09 23:18:37.304676] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:10.179 [2024-12-09 23:18:37.306140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.179 [2024-12-09 23:18:37.306348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:10.179 [2024-12-09 23:18:37.306385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.922 ms 00:37:10.179 [2024-12-09 23:18:37.306397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.179 [2024-12-09 23:18:37.339084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.179 [2024-12-09 23:18:37.339165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:37:10.179 [2024-12-09 23:18:37.339203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.641 ms 00:37:10.179 [2024-12-09 23:18:37.339215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.179 [2024-12-09 23:18:37.339314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.179 [2024-12-09 23:18:37.339332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:10.179 [2024-12-09 23:18:37.339352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:37:10.179 [2024-12-09 23:18:37.339363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.179 [2024-12-09 23:18:37.376992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.179 [2024-12-09 23:18:37.377249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:37:10.179 [2024-12-09 23:18:37.377281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.602 ms 00:37:10.179 [2024-12-09 23:18:37.377293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.179 [2024-12-09 23:18:37.415160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.179 [2024-12-09 23:18:37.415235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:37:10.179 [2024-12-09 23:18:37.415256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.856 ms 00:37:10.179 [2024-12-09 23:18:37.415283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.179 [2024-12-09 23:18:37.416042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.179 [2024-12-09 23:18:37.416060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:10.179 [2024-12-09 23:18:37.416076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.709 ms 00:37:10.179 [2024-12-09 23:18:37.416092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.443 [2024-12-09 23:18:37.520582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.443 [2024-12-09 23:18:37.520665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:37:10.443 [2024-12-09 23:18:37.520710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 104.542 ms 00:37:10.443 [2024-12-09 23:18:37.520722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.443 [2024-12-09 23:18:37.561367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:37:10.443 [2024-12-09 23:18:37.561447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:37:10.443 [2024-12-09 23:18:37.561506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.546 ms 00:37:10.443 [2024-12-09 23:18:37.561518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.443 [2024-12-09 23:18:37.601223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.443 [2024-12-09 23:18:37.601300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:37:10.443 [2024-12-09 23:18:37.601321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.682 ms 00:37:10.443 [2024-12-09 23:18:37.601348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.443 [2024-12-09 23:18:37.639181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.443 [2024-12-09 23:18:37.639255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:37:10.443 [2024-12-09 23:18:37.639291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.812 ms 00:37:10.443 [2024-12-09 23:18:37.639303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.443 [2024-12-09 23:18:37.639386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.443 [2024-12-09 23:18:37.639400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:10.443 [2024-12-09 23:18:37.639420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:37:10.443 [2024-12-09 23:18:37.639431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.443 [2024-12-09 23:18:37.639581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:10.443 [2024-12-09 23:18:37.639602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:10.443 [2024-12-09 23:18:37.639616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:37:10.443 [2024-12-09 23:18:37.639627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:10.443 [2024-12-09 23:18:37.641231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3913.826 ms, result 0 00:37:10.443 { 00:37:10.443 "name": "ftl", 00:37:10.443 "uuid": "d2578713-fbfc-46d1-a4ec-463d87c2c101" 00:37:10.443 } 00:37:10.443 23:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:37:10.704 [2024-12-09 23:18:37.871493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:10.704 23:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:37:10.963 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:37:10.963 [2024-12-09 23:18:38.291194] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:37:11.222 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:37:11.222 [2024-12-09 23:18:38.493101] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:11.222 23:18:38 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:37:11.791 Fill FTL, iteration 1 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83923 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83923 /var/tmp/spdk.tgt.sock 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83923 ']' 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:37:11.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:11.791 23:18:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:11.791 [2024-12-09 23:18:38.982400] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
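
The RPCs above publish the new ftl bdev as namespace 1 of an NVMe/TCP subsystem on loopback, and save_config snapshots the running target. Condensed into a sketch (same commands as the xtrace; only the handling of save_config's stdout is left open, since the log does not show a redirect):

  # Export the ftl bdev over NVMe/TCP on 127.0.0.1:4420.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" nvmf_create_transport --trtype TCP
  "$RPC" nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1   # -a: allow any host
  "$RPC" nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  "$RPC" nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  "$RPC" save_config   # dumps the current target configuration as JSON on stdout
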
00:37:11.791 [2024-12-09 23:18:38.982592] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83923 ] 00:37:12.050 [2024-12-09 23:18:39.163579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.050 [2024-12-09 23:18:39.299353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.983 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:12.983 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:37:12.983 23:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:37:13.551 ftln1 00:37:13.551 23:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:37:13.551 23:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:37:13.551 23:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:37:13.551 23:18:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83923 00:37:13.551 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83923 ']' 00:37:13.551 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83923 00:37:13.551 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:37:13.551 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:13.552 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83923 00:37:13.809 killing process with pid 83923 00:37:13.809 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:13.809 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:13.809 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83923' 00:37:13.809 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83923 00:37:13.809 23:18:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83923 00:37:16.338 23:18:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:37:16.338 23:18:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:37:16.338 [2024-12-09 23:18:43.502438] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
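
To drive I/O from the host side, the test launches a second SPDK app on its own RPC socket, attaches the exported namespace (which appears as ftln1), wraps that app's bdev subsystem config into ini.json, and then hands the config to spdk_dd. A sketch of the sequence above; the waitforlisten step is summarized as a comment, and the redirect target is inferred from the --json path used by spdk_dd:

  SPDK=/home/vagrant/spdk_repo/spdk
  INI=$SPDK/test/ftl/config/ini.json

  # Initiator-side SPDK app on a dedicated RPC socket.
  "$SPDK/build/bin/spdk_tgt" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  # ... wait for /var/tmp/spdk.tgt.sock to accept RPCs (waitforlisten above) ...

  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
      -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # -> ftln1

  # Wrap the bdev subsystem config so spdk_dd can replay it via --json.
  {
    echo '{"subsystems": ['
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'
  } > "$INI"
  kill "$spdk_ini_pid" && wait "$spdk_ini_pid"

  # Fill FTL, iteration 1: 1024 x 1 MiB random writes at queue depth 2.
  "$SPDK/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$INI" --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
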
00:37:16.338 [2024-12-09 23:18:43.502620] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83987 ] 00:37:16.600 [2024-12-09 23:18:43.690078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.600 [2024-12-09 23:18:43.828231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.508  [2024-12-09T23:18:46.414Z] Copying: 240/1024 [MB] (240 MBps) [2024-12-09T23:18:47.351Z] Copying: 482/1024 [MB] (242 MBps) [2024-12-09T23:18:48.735Z] Copying: 725/1024 [MB] (243 MBps) [2024-12-09T23:18:48.735Z] Copying: 968/1024 [MB] (243 MBps) [2024-12-09T23:18:50.113Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:37:22.777 00:37:22.777 Calculate MD5 checksum, iteration 1 00:37:22.777 23:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:37:22.777 23:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:37:22.777 23:18:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:22.777 23:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:22.777 23:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:22.777 23:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:22.777 23:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:22.777 23:18:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:22.777 [2024-12-09 23:18:49.966826] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
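
After the fill completes (about 241 MB/s through the TCP path in this run), the same 1 GiB range is read back through ftln1 into a scratch file and fingerprinted; the checksum stored in sums[] is kept so the data can be verified again after the shutdown and upgrade. Sketch, using the same dd flags as the xtrace:

  # Calculate MD5 checksum, iteration 1: read the written range back and hash it.
  "$SPDK/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$INI" --ib=ftln1 --of="$SPDK/test/ftl/file" \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[0]=$(md5sum "$SPDK/test/ftl/file" | cut -f1 -d' ')   # 397b9e88... in this run
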
00:37:22.777 [2024-12-09 23:18:49.967215] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84051 ] 00:37:23.036 [2024-12-09 23:18:50.149261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.036 [2024-12-09 23:18:50.286398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.967  [2024-12-09T23:18:52.561Z] Copying: 673/1024 [MB] (673 MBps) [2024-12-09T23:18:53.496Z] Copying: 1024/1024 [MB] (average 673 MBps) 00:37:26.160 00:37:26.160 23:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:37:26.160 23:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:28.063 23:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:37:28.063 Fill FTL, iteration 2 00:37:28.063 23:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=397b9e88e6cdbc22dacf52f3db1916bb 00:37:28.063 23:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:37:28.063 23:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:28.063 23:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:37:28.063 23:18:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:37:28.063 23:18:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:28.063 23:18:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:28.063 23:18:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:28.063 23:18:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:28.063 23:18:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:37:28.063 [2024-12-09 23:18:55.162629] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
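
Iteration 2 repeats the fill/readback pair one stripe further in: seek and skip both advance by count blocks, so the second gigabyte lands at MiB offsets 1024-2047. The loop structure, reconstructed from the upgrade_shutdown.sh variables visible in the xtrace (tcp_dd is the helper used above):

  seek=0; skip=0; i=0
  bs=1048576; count=1024; qd=2; iterations=2
  while (( i < iterations )); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))                       # 0 -> 1024 -> 2048, as in the log
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of="$SPDK/test/ftl/file" --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum "$SPDK/test/ftl/file" | cut -f1 -d' ')
      (( i++ ))
  done
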
00:37:28.063 [2024-12-09 23:18:55.163012] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84107 ] 00:37:28.063 [2024-12-09 23:18:55.340609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.321 [2024-12-09 23:18:55.486996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:29.742  [2024-12-09T23:18:58.014Z] Copying: 243/1024 [MB] (243 MBps) [2024-12-09T23:18:59.400Z] Copying: 485/1024 [MB] (242 MBps) [2024-12-09T23:19:00.336Z] Copying: 729/1024 [MB] (244 MBps) [2024-12-09T23:19:00.336Z] Copying: 969/1024 [MB] (240 MBps) [2024-12-09T23:19:01.712Z] Copying: 1024/1024 [MB] (average 242 MBps) 00:37:34.376 00:37:34.376 23:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:37:34.376 Calculate MD5 checksum, iteration 2 00:37:34.376 23:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:37:34.376 23:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:34.376 23:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:34.376 23:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:34.376 23:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:34.376 23:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:34.376 23:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:37:34.376 [2024-12-09 23:19:01.591252] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
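
Once both stripes are written and fingerprinted, the xtrace further below flips the FTL properties that arm the shutdown-time upgrade and checks via bdev_ftl_get_properties that the NV cache actually holds data: with 2 GiB written into 5 chunks, two chunks are CLOSED at utilization 1.0 and one is OPEN with a small non-zero utilization, so the jq filter counts 3 in-use chunks. A sketch of that step; the abort on a zero count is an assumption, since the log only shows the comparison itself:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" bdev_ftl_set_property -b ftl -p verbose_mode -v true
  "$RPC" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true

  # Count NV cache chunks that hold data (3 of 5 in this run).
  used=$("$RPC" bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  if [[ $used -eq 0 ]]; then
      echo "NV cache unexpectedly empty" >&2; exit 1   # assumed failure handling
  fi
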
00:37:34.376 [2024-12-09 23:19:01.591693] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84171 ] 00:37:34.635 [2024-12-09 23:19:01.771337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.635 [2024-12-09 23:19:01.918085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:36.539  [2024-12-09T23:19:04.134Z] Copying: 692/1024 [MB] (692 MBps) [2024-12-09T23:19:06.052Z] Copying: 1024/1024 [MB] (average 697 MBps) 00:37:38.716 00:37:38.716 23:19:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:37:38.716 23:19:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:37:40.099 23:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:37:40.099 23:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=dbfc562879c84584b9d06025d0ca5249 00:37:40.099 23:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:37:40.099 23:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:37:40.099 23:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:40.358 [2024-12-09 23:19:07.526963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:40.358 [2024-12-09 23:19:07.527038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:40.358 [2024-12-09 23:19:07.527057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:40.358 [2024-12-09 23:19:07.527069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:40.358 [2024-12-09 23:19:07.527101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:40.358 [2024-12-09 23:19:07.527118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:40.358 [2024-12-09 23:19:07.527130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:40.358 [2024-12-09 23:19:07.527141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:40.358 [2024-12-09 23:19:07.527163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:40.358 [2024-12-09 23:19:07.527174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:40.358 [2024-12-09 23:19:07.527185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:40.358 [2024-12-09 23:19:07.527195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:40.358 [2024-12-09 23:19:07.527263] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.292 ms, result 0 00:37:40.358 true 00:37:40.358 23:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:40.616 { 00:37:40.617 "name": "ftl", 00:37:40.617 "properties": [ 00:37:40.617 { 00:37:40.617 "name": "superblock_version", 00:37:40.617 "value": 5, 00:37:40.617 "read-only": true 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "name": "base_device", 00:37:40.617 "bands": [ 00:37:40.617 { 00:37:40.617 "id": 0, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 
00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 1, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 2, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 3, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 4, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 5, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 6, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 7, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 8, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 9, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 10, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 11, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 12, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 13, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 14, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 15, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 16, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 17, 00:37:40.617 "state": "FREE", 00:37:40.617 "validity": 0.0 00:37:40.617 } 00:37:40.617 ], 00:37:40.617 "read-only": true 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "name": "cache_device", 00:37:40.617 "type": "bdev", 00:37:40.617 "chunks": [ 00:37:40.617 { 00:37:40.617 "id": 0, 00:37:40.617 "state": "INACTIVE", 00:37:40.617 "utilization": 0.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 1, 00:37:40.617 "state": "CLOSED", 00:37:40.617 "utilization": 1.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 2, 00:37:40.617 "state": "CLOSED", 00:37:40.617 "utilization": 1.0 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 3, 00:37:40.617 "state": "OPEN", 00:37:40.617 "utilization": 0.001953125 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "id": 4, 00:37:40.617 "state": "OPEN", 00:37:40.617 "utilization": 0.0 00:37:40.617 } 00:37:40.617 ], 00:37:40.617 "read-only": true 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "name": "verbose_mode", 00:37:40.617 "value": true, 00:37:40.617 "unit": "", 00:37:40.617 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:40.617 }, 00:37:40.617 { 00:37:40.617 "name": "prep_upgrade_on_shutdown", 00:37:40.617 "value": false, 00:37:40.617 "unit": "", 00:37:40.617 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:40.617 } 00:37:40.617 ] 00:37:40.617 } 00:37:40.617 23:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:37:40.876 [2024-12-09 23:19:08.001703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:37:40.876 [2024-12-09 23:19:08.001994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:40.876 [2024-12-09 23:19:08.002156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:37:40.876 [2024-12-09 23:19:08.002246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:40.876 [2024-12-09 23:19:08.002334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:40.876 [2024-12-09 23:19:08.002400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:40.876 [2024-12-09 23:19:08.002531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:40.876 [2024-12-09 23:19:08.002617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:40.876 [2024-12-09 23:19:08.002682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:40.876 [2024-12-09 23:19:08.002719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:40.876 [2024-12-09 23:19:08.002800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:40.876 [2024-12-09 23:19:08.002835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:40.876 [2024-12-09 23:19:08.002993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.271 ms, result 0 00:37:40.876 true 00:37:40.876 23:19:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:37:40.876 23:19:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:37:40.876 23:19:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:41.134 23:19:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:37:41.134 23:19:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:37:41.134 23:19:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:41.398 [2024-12-09 23:19:08.473701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:41.399 [2024-12-09 23:19:08.473769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:41.399 [2024-12-09 23:19:08.473786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:41.399 [2024-12-09 23:19:08.473797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:41.399 [2024-12-09 23:19:08.473828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:41.399 [2024-12-09 23:19:08.473839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:41.399 [2024-12-09 23:19:08.473850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:41.399 [2024-12-09 23:19:08.473860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:41.399 [2024-12-09 23:19:08.473881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:41.399 [2024-12-09 23:19:08.473892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:41.399 [2024-12-09 23:19:08.473902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:41.399 [2024-12-09 23:19:08.473912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:37:41.399 [2024-12-09 23:19:08.473974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.267 ms, result 0 00:37:41.399 true 00:37:41.399 23:19:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:41.399 { 00:37:41.399 "name": "ftl", 00:37:41.399 "properties": [ 00:37:41.399 { 00:37:41.399 "name": "superblock_version", 00:37:41.399 "value": 5, 00:37:41.399 "read-only": true 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "name": "base_device", 00:37:41.399 "bands": [ 00:37:41.399 { 00:37:41.399 "id": 0, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 1, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 2, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 3, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 4, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 5, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 6, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 7, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 8, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 9, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 10, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 11, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 12, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 13, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 14, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 15, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 16, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 17, 00:37:41.399 "state": "FREE", 00:37:41.399 "validity": 0.0 00:37:41.399 } 00:37:41.399 ], 00:37:41.399 "read-only": true 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "name": "cache_device", 00:37:41.399 "type": "bdev", 00:37:41.399 "chunks": [ 00:37:41.399 { 00:37:41.399 "id": 0, 00:37:41.399 "state": "INACTIVE", 00:37:41.399 "utilization": 0.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 1, 00:37:41.399 "state": "CLOSED", 00:37:41.399 "utilization": 1.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 2, 00:37:41.399 "state": "CLOSED", 00:37:41.399 "utilization": 1.0 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 3, 00:37:41.399 "state": "OPEN", 00:37:41.399 "utilization": 0.001953125 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "id": 4, 00:37:41.399 "state": "OPEN", 00:37:41.399 "utilization": 0.0 00:37:41.399 } 00:37:41.399 ], 00:37:41.399 "read-only": true 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "name": "verbose_mode", 
00:37:41.399 "value": true, 00:37:41.399 "unit": "", 00:37:41.399 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:41.399 }, 00:37:41.399 { 00:37:41.399 "name": "prep_upgrade_on_shutdown", 00:37:41.399 "value": true, 00:37:41.399 "unit": "", 00:37:41.399 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:41.399 } 00:37:41.399 ] 00:37:41.399 } 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83800 ]] 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83800 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83800 ']' 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83800 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83800 00:37:41.699 killing process with pid 83800 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83800' 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83800 00:37:41.699 23:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83800 00:37:42.645 [2024-12-09 23:19:09.930210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:37:42.645 [2024-12-09 23:19:09.949999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:42.645 [2024-12-09 23:19:09.950074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:37:42.645 [2024-12-09 23:19:09.950092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:42.645 [2024-12-09 23:19:09.950120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:42.645 [2024-12-09 23:19:09.950147] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:37:42.645 [2024-12-09 23:19:09.954806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:42.645 [2024-12-09 23:19:09.954845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:37:42.645 [2024-12-09 23:19:09.954859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.649 ms 00:37:42.645 [2024-12-09 23:19:09.954878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.770 [2024-12-09 23:19:17.176802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.770 [2024-12-09 23:19:17.176894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:37:50.770 [2024-12-09 23:19:17.176913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7233.611 ms 00:37:50.770 [2024-12-09 23:19:17.176946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.770 [2024-12-09 23:19:17.178299] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:37:50.770 [2024-12-09 23:19:17.178336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:37:50.770 [2024-12-09 23:19:17.178349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.334 ms 00:37:50.771 [2024-12-09 23:19:17.178360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.179289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.771 [2024-12-09 23:19:17.179314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:37:50.771 [2024-12-09 23:19:17.179328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.895 ms 00:37:50.771 [2024-12-09 23:19:17.179339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.195311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.771 [2024-12-09 23:19:17.195383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:37:50.771 [2024-12-09 23:19:17.195400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.944 ms 00:37:50.771 [2024-12-09 23:19:17.195411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.205332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.771 [2024-12-09 23:19:17.205423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:37:50.771 [2024-12-09 23:19:17.205441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.871 ms 00:37:50.771 [2024-12-09 23:19:17.205476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.205585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.771 [2024-12-09 23:19:17.205599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:37:50.771 [2024-12-09 23:19:17.205622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:37:50.771 [2024-12-09 23:19:17.205633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.220969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.771 [2024-12-09 23:19:17.221031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:37:50.771 [2024-12-09 23:19:17.221048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.336 ms 00:37:50.771 [2024-12-09 23:19:17.221060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.236897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.771 [2024-12-09 23:19:17.236951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:37:50.771 [2024-12-09 23:19:17.236967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.808 ms 00:37:50.771 [2024-12-09 23:19:17.236978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.253240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.771 [2024-12-09 23:19:17.253483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:37:50.771 [2024-12-09 23:19:17.253511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.164 ms 00:37:50.771 [2024-12-09 23:19:17.253523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.269185] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.771 [2024-12-09 23:19:17.269434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:37:50.771 [2024-12-09 23:19:17.269480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.570 ms 00:37:50.771 [2024-12-09 23:19:17.269491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.269542] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:37:50.771 [2024-12-09 23:19:17.269579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:37:50.771 [2024-12-09 23:19:17.269593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:37:50.771 [2024-12-09 23:19:17.269605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:37:50.771 [2024-12-09 23:19:17.269617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:50.771 [2024-12-09 23:19:17.269782] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:37:50.771 [2024-12-09 23:19:17.269792] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d2578713-fbfc-46d1-a4ec-463d87c2c101 00:37:50.771 [2024-12-09 23:19:17.269804] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:37:50.771 [2024-12-09 23:19:17.269814] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:37:50.771 [2024-12-09 23:19:17.269823] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:37:50.771 [2024-12-09 23:19:17.269834] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:37:50.771 [2024-12-09 23:19:17.269845] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:37:50.771 [2024-12-09 23:19:17.269861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:37:50.771 [2024-12-09 23:19:17.269872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:37:50.771 [2024-12-09 23:19:17.269881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:37:50.771 [2024-12-09 23:19:17.269891] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:37:50.771 [2024-12-09 23:19:17.269904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.771 [2024-12-09 23:19:17.269919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:37:50.771 [2024-12-09 23:19:17.269930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.365 ms 00:37:50.771 [2024-12-09 23:19:17.269941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.290611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.771 [2024-12-09 23:19:17.290684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:37:50.771 [2024-12-09 23:19:17.290700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.661 ms 00:37:50.771 [2024-12-09 23:19:17.290721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.291273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:50.771 [2024-12-09 23:19:17.291285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:37:50.771 [2024-12-09 23:19:17.291297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.512 ms 00:37:50.771 [2024-12-09 23:19:17.291307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.359069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.771 [2024-12-09 23:19:17.359382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:50.771 [2024-12-09 23:19:17.359417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.771 [2024-12-09 23:19:17.359428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.359498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.771 [2024-12-09 23:19:17.359510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:50.771 [2024-12-09 23:19:17.359521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.771 [2024-12-09 23:19:17.359532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.359646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.771 [2024-12-09 23:19:17.359661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:50.771 [2024-12-09 23:19:17.359673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.771 [2024-12-09 23:19:17.359688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.359721] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.771 [2024-12-09 23:19:17.359732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:50.771 [2024-12-09 23:19:17.359743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.771 [2024-12-09 23:19:17.359753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.491841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.771 [2024-12-09 23:19:17.491912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:50.771 [2024-12-09 23:19:17.491929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.771 [2024-12-09 23:19:17.491950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.603894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.771 [2024-12-09 23:19:17.603972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:50.771 [2024-12-09 23:19:17.603989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.771 [2024-12-09 23:19:17.604000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.604126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.771 [2024-12-09 23:19:17.604139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:50.771 [2024-12-09 23:19:17.604151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.771 [2024-12-09 23:19:17.604162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.604225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.771 [2024-12-09 23:19:17.604238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:50.771 [2024-12-09 23:19:17.604250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.771 [2024-12-09 23:19:17.604261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.771 [2024-12-09 23:19:17.604399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.771 [2024-12-09 23:19:17.604414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:50.771 [2024-12-09 23:19:17.604426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.772 [2024-12-09 23:19:17.604436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.772 [2024-12-09 23:19:17.604507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.772 [2024-12-09 23:19:17.604527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:37:50.772 [2024-12-09 23:19:17.604539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.772 [2024-12-09 23:19:17.604550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.772 [2024-12-09 23:19:17.604593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.772 [2024-12-09 23:19:17.604605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:50.772 [2024-12-09 23:19:17.604616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.772 [2024-12-09 23:19:17.604627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.772 
[2024-12-09 23:19:17.604680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:37:50.772 [2024-12-09 23:19:17.604692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:50.772 [2024-12-09 23:19:17.604704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:37:50.772 [2024-12-09 23:19:17.604715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:50.772 [2024-12-09 23:19:17.604841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7667.244 ms, result 0 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:54.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84389 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84389 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84389 ']' 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:37:54.058 23:19:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:54.058 [2024-12-09 23:19:20.934628] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:37:54.058 [2024-12-09 23:19:20.934773] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84389 ] 00:37:54.058 [2024-12-09 23:19:21.118701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:54.058 [2024-12-09 23:19:21.242872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:54.990 [2024-12-09 23:19:22.231361] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:54.990 [2024-12-09 23:19:22.231476] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:37:55.249 [2024-12-09 23:19:22.378775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.249 [2024-12-09 23:19:22.378851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:37:55.249 [2024-12-09 23:19:22.378868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:55.249 [2024-12-09 23:19:22.378879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.249 [2024-12-09 23:19:22.378952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.249 [2024-12-09 23:19:22.378965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:37:55.249 [2024-12-09 23:19:22.378977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:37:55.250 [2024-12-09 23:19:22.378988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.250 [2024-12-09 23:19:22.379019] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:37:55.250 [2024-12-09 23:19:22.380177] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:37:55.250 [2024-12-09 23:19:22.380233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.250 [2024-12-09 23:19:22.380246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:37:55.250 [2024-12-09 23:19:22.380257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.228 ms 00:37:55.250 [2024-12-09 23:19:22.380267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.250 [2024-12-09 23:19:22.382517] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:37:55.250 [2024-12-09 23:19:22.403232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.250 [2024-12-09 23:19:22.403298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:37:55.250 [2024-12-09 23:19:22.403339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.748 ms 00:37:55.250 [2024-12-09 23:19:22.403350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.250 [2024-12-09 23:19:22.403470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.250 [2024-12-09 23:19:22.403485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:37:55.250 [2024-12-09 23:19:22.403497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:37:55.250 [2024-12-09 23:19:22.403508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.250 [2024-12-09 23:19:22.413338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.250 [2024-12-09 
23:19:22.413391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:37:55.250 [2024-12-09 23:19:22.413406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.715 ms 00:37:55.250 [2024-12-09 23:19:22.413432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.250 [2024-12-09 23:19:22.413537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.250 [2024-12-09 23:19:22.413556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:37:55.250 [2024-12-09 23:19:22.413569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:37:55.250 [2024-12-09 23:19:22.413580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.250 [2024-12-09 23:19:22.413663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.250 [2024-12-09 23:19:22.413681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:37:55.250 [2024-12-09 23:19:22.413692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:37:55.250 [2024-12-09 23:19:22.413703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.250 [2024-12-09 23:19:22.413733] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:37:55.250 [2024-12-09 23:19:22.418566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.250 [2024-12-09 23:19:22.418753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:37:55.250 [2024-12-09 23:19:22.418778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.849 ms 00:37:55.250 [2024-12-09 23:19:22.418796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.250 [2024-12-09 23:19:22.418844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.250 [2024-12-09 23:19:22.418856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:37:55.250 [2024-12-09 23:19:22.418867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:55.250 [2024-12-09 23:19:22.418878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.250 [2024-12-09 23:19:22.418928] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:37:55.250 [2024-12-09 23:19:22.418957] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:37:55.250 [2024-12-09 23:19:22.418993] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:37:55.250 [2024-12-09 23:19:22.419012] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:37:55.250 [2024-12-09 23:19:22.419103] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:37:55.250 [2024-12-09 23:19:22.419117] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:37:55.250 [2024-12-09 23:19:22.419130] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:37:55.250 [2024-12-09 23:19:22.419144] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:37:55.250 [2024-12-09 23:19:22.419156] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:37:55.250 [2024-12-09 23:19:22.419171] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:37:55.250 [2024-12-09 23:19:22.419182] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:37:55.250 [2024-12-09 23:19:22.419192] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:37:55.250 [2024-12-09 23:19:22.419202] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:37:55.250 [2024-12-09 23:19:22.419213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.250 [2024-12-09 23:19:22.419224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:37:55.250 [2024-12-09 23:19:22.419234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:37:55.250 [2024-12-09 23:19:22.419245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.250 [2024-12-09 23:19:22.419318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.250 [2024-12-09 23:19:22.419329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:37:55.250 [2024-12-09 23:19:22.419343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:37:55.250 [2024-12-09 23:19:22.419353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.250 [2024-12-09 23:19:22.419474] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:37:55.250 [2024-12-09 23:19:22.419489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:37:55.250 [2024-12-09 23:19:22.419500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:55.250 [2024-12-09 23:19:22.419511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:37:55.250 [2024-12-09 23:19:22.419531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:37:55.250 [2024-12-09 23:19:22.419550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:37:55.250 [2024-12-09 23:19:22.419561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:37:55.250 [2024-12-09 23:19:22.419570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:37:55.250 [2024-12-09 23:19:22.419592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:37:55.250 [2024-12-09 23:19:22.419601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:37:55.250 [2024-12-09 23:19:22.419621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:37:55.250 [2024-12-09 23:19:22.419630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:37:55.250 [2024-12-09 23:19:22.419650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:37:55.250 [2024-12-09 23:19:22.419659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419668] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:37:55.250 [2024-12-09 23:19:22.419678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:37:55.250 [2024-12-09 23:19:22.419687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:55.250 [2024-12-09 23:19:22.419696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:37:55.250 [2024-12-09 23:19:22.419718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:37:55.250 [2024-12-09 23:19:22.419728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:55.250 [2024-12-09 23:19:22.419737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:37:55.250 [2024-12-09 23:19:22.419747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:37:55.250 [2024-12-09 23:19:22.419756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:55.250 [2024-12-09 23:19:22.419766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:37:55.250 [2024-12-09 23:19:22.419780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:37:55.250 [2024-12-09 23:19:22.419789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:37:55.250 [2024-12-09 23:19:22.419799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:37:55.250 [2024-12-09 23:19:22.419808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:37:55.250 [2024-12-09 23:19:22.419818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:37:55.250 [2024-12-09 23:19:22.419836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:37:55.250 [2024-12-09 23:19:22.419845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:37:55.250 [2024-12-09 23:19:22.419863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:37:55.250 [2024-12-09 23:19:22.419891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:37:55.250 [2024-12-09 23:19:22.419900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419909] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:37:55.250 [2024-12-09 23:19:22.419920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:37:55.250 [2024-12-09 23:19:22.419929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:37:55.250 [2024-12-09 23:19:22.419940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:37:55.250 [2024-12-09 23:19:22.419955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:37:55.251 [2024-12-09 23:19:22.419964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:37:55.251 [2024-12-09 23:19:22.419974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:37:55.251 [2024-12-09 23:19:22.419983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:37:55.251 [2024-12-09 23:19:22.419992] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:37:55.251 [2024-12-09 23:19:22.420001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:37:55.251 [2024-12-09 23:19:22.420012] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:37:55.251 [2024-12-09 23:19:22.420025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:55.251 [2024-12-09 23:19:22.420036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:37:55.251 [2024-12-09 23:19:22.420046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:37:55.251 [2024-12-09 23:19:22.420056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:37:55.251 [2024-12-09 23:19:22.420066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:37:55.251 [2024-12-09 23:19:22.420076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:37:55.251 [2024-12-09 23:19:22.420087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:37:55.251 [2024-12-09 23:19:22.420097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:37:55.251 [2024-12-09 23:19:22.420107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:37:55.251 [2024-12-09 23:19:22.420117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:37:55.251 [2024-12-09 23:19:22.420127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:37:55.251 [2024-12-09 23:19:22.420137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:37:55.251 [2024-12-09 23:19:22.420147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:37:55.251 [2024-12-09 23:19:22.420158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:37:55.251 [2024-12-09 23:19:22.420168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:37:55.251 [2024-12-09 23:19:22.420179] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:37:55.251 [2024-12-09 23:19:22.420190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:55.251 [2024-12-09 23:19:22.420201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:55.251 [2024-12-09 23:19:22.420213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:37:55.251 [2024-12-09 23:19:22.420223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:37:55.251 [2024-12-09 23:19:22.420234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:37:55.251 [2024-12-09 23:19:22.420246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:55.251 [2024-12-09 23:19:22.420256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:37:55.251 [2024-12-09 23:19:22.420267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:37:55.251 [2024-12-09 23:19:22.420277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:55.251 [2024-12-09 23:19:22.420327] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:37:55.251 [2024-12-09 23:19:22.420340] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:37:58.539 [2024-12-09 23:19:25.724786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.539 [2024-12-09 23:19:25.724868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:37:58.539 [2024-12-09 23:19:25.724887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3309.821 ms 00:37:58.539 [2024-12-09 23:19:25.724914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.539 [2024-12-09 23:19:25.765574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.539 [2024-12-09 23:19:25.765637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:37:58.539 [2024-12-09 23:19:25.765655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.354 ms 00:37:58.539 [2024-12-09 23:19:25.765666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.539 [2024-12-09 23:19:25.765802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.539 [2024-12-09 23:19:25.765822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:37:58.539 [2024-12-09 23:19:25.765833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:37:58.539 [2024-12-09 23:19:25.765845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.539 [2024-12-09 23:19:25.817981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.539 [2024-12-09 23:19:25.818276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:37:58.539 [2024-12-09 23:19:25.818313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.170 ms 00:37:58.539 [2024-12-09 23:19:25.818325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.539 [2024-12-09 23:19:25.818402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.539 [2024-12-09 23:19:25.818414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:37:58.539 [2024-12-09 23:19:25.818425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:37:58.539 [2024-12-09 23:19:25.818435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.539 [2024-12-09 23:19:25.819335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.539 [2024-12-09 23:19:25.819359] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:37:58.539 [2024-12-09 23:19:25.819370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.762 ms 00:37:58.539 [2024-12-09 23:19:25.819381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.539 [2024-12-09 23:19:25.819435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.539 [2024-12-09 23:19:25.819447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:37:58.539 [2024-12-09 23:19:25.819469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:37:58.539 [2024-12-09 23:19:25.819480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.539 [2024-12-09 23:19:25.845260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.539 [2024-12-09 23:19:25.845588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:37:58.539 [2024-12-09 23:19:25.845617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.794 ms 00:37:58.539 [2024-12-09 23:19:25.845629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:25.875176] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:37:58.799 [2024-12-09 23:19:25.875248] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:37:58.799 [2024-12-09 23:19:25.875268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:25.875281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:37:58.799 [2024-12-09 23:19:25.875296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.502 ms 00:37:58.799 [2024-12-09 23:19:25.875307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:25.896760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:25.896828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:37:58.799 [2024-12-09 23:19:25.896846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.410 ms 00:37:58.799 [2024-12-09 23:19:25.896874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:25.917219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:25.917492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:37:58.799 [2024-12-09 23:19:25.917519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.290 ms 00:37:58.799 [2024-12-09 23:19:25.917531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:25.937482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:25.937549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:37:58.799 [2024-12-09 23:19:25.937565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.908 ms 00:37:58.799 [2024-12-09 23:19:25.937576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:25.938514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:25.938544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:37:58.799 [2024-12-09 
23:19:25.938557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.764 ms 00:37:58.799 [2024-12-09 23:19:25.938567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:26.031656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:26.031754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:37:58.799 [2024-12-09 23:19:26.031772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 93.202 ms 00:37:58.799 [2024-12-09 23:19:26.031784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:26.045585] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:37:58.799 [2024-12-09 23:19:26.047065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:26.047251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:37:58.799 [2024-12-09 23:19:26.047278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.211 ms 00:37:58.799 [2024-12-09 23:19:26.047291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:26.047442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:26.047472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:37:58.799 [2024-12-09 23:19:26.047484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:37:58.799 [2024-12-09 23:19:26.047495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:26.047566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:26.047580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:37:58.799 [2024-12-09 23:19:26.047592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:37:58.799 [2024-12-09 23:19:26.047602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:26.047629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:26.047641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:37:58.799 [2024-12-09 23:19:26.047655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:37:58.799 [2024-12-09 23:19:26.047666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:26.047701] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:37:58.799 [2024-12-09 23:19:26.047714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:26.047724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:37:58.799 [2024-12-09 23:19:26.047735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:37:58.799 [2024-12-09 23:19:26.047745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:26.087352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:26.087436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:37:58.799 [2024-12-09 23:19:26.087468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.642 ms 00:37:58.799 [2024-12-09 23:19:26.087480] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:26.087591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:58.799 [2024-12-09 23:19:26.087605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:37:58.799 [2024-12-09 23:19:26.087616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:37:58.799 [2024-12-09 23:19:26.087627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:58.799 [2024-12-09 23:19:26.088951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3715.699 ms, result 0 00:37:58.799 [2024-12-09 23:19:26.103821] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:58.799 [2024-12-09 23:19:26.119821] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:37:58.799 [2024-12-09 23:19:26.129661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:59.059 23:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:59.059 23:19:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:37:59.059 23:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:37:59.059 23:19:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:37:59.059 23:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:37:59.059 [2024-12-09 23:19:26.385224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.059 [2024-12-09 23:19:26.385298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:37:59.059 [2024-12-09 23:19:26.385323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:37:59.059 [2024-12-09 23:19:26.385334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.059 [2024-12-09 23:19:26.385365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.059 [2024-12-09 23:19:26.385377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:37:59.059 [2024-12-09 23:19:26.385388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:37:59.059 [2024-12-09 23:19:26.385399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.059 [2024-12-09 23:19:26.385420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:37:59.059 [2024-12-09 23:19:26.385431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:37:59.059 [2024-12-09 23:19:26.385442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:37:59.059 [2024-12-09 23:19:26.385472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:37:59.059 [2024-12-09 23:19:26.385542] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.308 ms, result 0 00:37:59.059 true 00:37:59.317 23:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:59.317 { 00:37:59.317 "name": "ftl", 00:37:59.317 "properties": [ 00:37:59.317 { 00:37:59.317 "name": "superblock_version", 00:37:59.317 "value": 5, 00:37:59.317 "read-only": true 00:37:59.317 }, 
00:37:59.317 { 00:37:59.317 "name": "base_device", 00:37:59.317 "bands": [ 00:37:59.317 { 00:37:59.317 "id": 0, 00:37:59.317 "state": "CLOSED", 00:37:59.317 "validity": 1.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 1, 00:37:59.317 "state": "CLOSED", 00:37:59.317 "validity": 1.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 2, 00:37:59.317 "state": "CLOSED", 00:37:59.317 "validity": 0.007843137254901933 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 3, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 4, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 5, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 6, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 7, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 8, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 9, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 10, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 11, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 12, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 13, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 14, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 15, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 16, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 17, 00:37:59.317 "state": "FREE", 00:37:59.317 "validity": 0.0 00:37:59.317 } 00:37:59.317 ], 00:37:59.317 "read-only": true 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "name": "cache_device", 00:37:59.317 "type": "bdev", 00:37:59.317 "chunks": [ 00:37:59.317 { 00:37:59.317 "id": 0, 00:37:59.317 "state": "INACTIVE", 00:37:59.317 "utilization": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 1, 00:37:59.317 "state": "OPEN", 00:37:59.317 "utilization": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 2, 00:37:59.317 "state": "OPEN", 00:37:59.317 "utilization": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 3, 00:37:59.317 "state": "FREE", 00:37:59.317 "utilization": 0.0 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "id": 4, 00:37:59.317 "state": "FREE", 00:37:59.317 "utilization": 0.0 00:37:59.317 } 00:37:59.317 ], 00:37:59.317 "read-only": true 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "name": "verbose_mode", 00:37:59.317 "value": true, 00:37:59.317 "unit": "", 00:37:59.317 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:37:59.317 }, 00:37:59.317 { 00:37:59.317 "name": "prep_upgrade_on_shutdown", 00:37:59.317 "value": false, 00:37:59.317 "unit": "", 00:37:59.317 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:37:59.317 } 00:37:59.317 ] 00:37:59.317 } 00:37:59.317 23:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:37:59.317 23:19:26 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:59.317 23:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:37:59.575 23:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:37:59.575 23:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:37:59.575 23:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:37:59.575 23:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:37:59.575 23:19:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:37:59.834 Validate MD5 checksum, iteration 1 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:37:59.834 23:19:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:00.093 [2024-12-09 23:19:27.189871] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:38:00.093 [2024-12-09 23:19:27.190253] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84470 ] 00:38:00.093 [2024-12-09 23:19:27.374913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.353 [2024-12-09 23:19:27.512724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:02.270  [2024-12-09T23:19:29.865Z] Copying: 701/1024 [MB] (701 MBps) [2024-12-09T23:19:31.772Z] Copying: 1024/1024 [MB] (average 693 MBps) 00:38:04.436 00:38:04.436 23:19:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:38:04.436 23:19:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:05.815 Validate MD5 checksum, iteration 2 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=397b9e88e6cdbc22dacf52f3db1916bb 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 397b9e88e6cdbc22dacf52f3db1916bb != \3\9\7\b\9\e\8\8\e\6\c\d\b\c\2\2\d\a\c\f\5\2\f\3\d\b\1\9\1\6\b\b ]] 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:05.815 23:19:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:06.074 [2024-12-09 23:19:33.158976] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
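
Each validation pass reads 1 GiB from the ftln1 initiator-side bdev at a growing offset and compares its MD5 against the sum recorded before shutdown (397b9e88… for the first gigabyte above). A simplified sketch of that loop; tcp_dd is the helper invoked above, while testfile and expected_sums are stand-ins for the script's real variables:

iterations=2
skip=0
for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # 1024 blocks of 1 MiB at queue depth 2; --skip counts bs-sized blocks
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sum=$(md5sum "$testfile" | cut -f1 -d' ')
    # Data written before the dirty shutdown must read back bit-identical
    [[ $sum == "${expected_sums[$i]}" ]] \
        || { echo "MD5 mismatch at iteration $((i + 1))" >&2; exit 1; }
    skip=$((skip + 1024))
done
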
00:38:06.074 [2024-12-09 23:19:33.159391] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84538 ] 00:38:06.074 [2024-12-09 23:19:33.343520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:06.332 [2024-12-09 23:19:33.479463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:08.236  [2024-12-09T23:19:35.830Z] Copying: 701/1024 [MB] (701 MBps) [2024-12-09T23:19:39.150Z] Copying: 1024/1024 [MB] (average 696 MBps) 00:38:11.814 00:38:11.814 23:19:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:38:11.814 23:19:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dbfc562879c84584b9d06025d0ca5249 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dbfc562879c84584b9d06025d0ca5249 != \d\b\f\c\5\6\2\8\7\9\c\8\4\5\8\4\b\9\d\0\6\0\2\5\d\0\c\a\5\2\4\9 ]] 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84389 ]] 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84389 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84617 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84617 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84617 ']' 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:13.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
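
This is the dirty-shutdown step itself: PID 84389 is killed with SIGKILL, so FTL never gets to run its clean-shutdown sequence, and a fresh target (PID 84617) is started from the saved tgt.json. A sketch of the sequence with paths as they appear in the log; error handling omitted:

# Simulate a crash: SIGKILL leaves the FTL superblock marked dirty
kill -9 "$spdk_tgt_pid"
unset spdk_tgt_pid

# Restart from the configuration saved before the kill; startup must now
# take the FTL recovery path instead of a clean open
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"   # returns once /var/tmp/spdk.sock accepts RPCs
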
00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:13.737 23:19:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:38:13.737 [2024-12-09 23:19:40.859434] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:38:13.737 [2024-12-09 23:19:40.859861] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84617 ] 00:38:13.738 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84389 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:38:13.738 [2024-12-09 23:19:41.045074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.001 [2024-12-09 23:19:41.167821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.052 [2024-12-09 23:19:42.168091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:38:15.052 [2024-12-09 23:19:42.168190] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:38:15.052 [2024-12-09 23:19:42.316293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.316372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:38:15.052 [2024-12-09 23:19:42.316389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:15.052 [2024-12-09 23:19:42.316400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.316497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.316511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:38:15.052 [2024-12-09 23:19:42.316523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:38:15.052 [2024-12-09 23:19:42.316534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.316567] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:38:15.052 [2024-12-09 23:19:42.317714] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:38:15.052 [2024-12-09 23:19:42.317754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.317765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:38:15.052 [2024-12-09 23:19:42.317777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.201 ms 00:38:15.052 [2024-12-09 23:19:42.317788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.318198] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:38:15.052 [2024-12-09 23:19:42.342582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.342653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:38:15.052 [2024-12-09 23:19:42.342686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.421 ms 
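
On restart FTL finds the superblock dirty (SHM: clean 0, shm_clean 0 above) and takes the full recovery path traced below: restore P2L checkpoints, replay open bands and chunks, rebuild the L2P. For reference, the same device could be created over RPC instead of the pre-baked tgt.json; in this sketch only cachen1p0 comes from the log and the base bdev name is an assumption:

# Hypothetical RPC bring-up ("basen1" is an assumed base-device name;
# cachen1p0 matches the write-buffer cache named in the log)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_create \
    -b ftl -d basen1 -c cachen1p0
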
00:38:15.052 [2024-12-09 23:19:42.342709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.357307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.357390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:38:15.052 [2024-12-09 23:19:42.357405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:38:15.052 [2024-12-09 23:19:42.357416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.358055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.358080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:38:15.052 [2024-12-09 23:19:42.358092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.476 ms 00:38:15.052 [2024-12-09 23:19:42.358107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.358202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.358218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:38:15.052 [2024-12-09 23:19:42.358230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:38:15.052 [2024-12-09 23:19:42.358240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.358278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.358290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:38:15.052 [2024-12-09 23:19:42.358301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:38:15.052 [2024-12-09 23:19:42.358310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.358339] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:38:15.052 [2024-12-09 23:19:42.362913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.362952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:38:15.052 [2024-12-09 23:19:42.362965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.588 ms 00:38:15.052 [2024-12-09 23:19:42.362979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.363017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.363029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:38:15.052 [2024-12-09 23:19:42.363040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:38:15.052 [2024-12-09 23:19:42.363051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.363102] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:38:15.052 [2024-12-09 23:19:42.363136] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:38:15.052 [2024-12-09 23:19:42.363183] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:38:15.052 [2024-12-09 23:19:42.363210] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:38:15.052 [2024-12-09 
23:19:42.363298] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:38:15.052 [2024-12-09 23:19:42.363312] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:38:15.052 [2024-12-09 23:19:42.363326] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:38:15.052 [2024-12-09 23:19:42.363339] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:38:15.052 [2024-12-09 23:19:42.363351] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:38:15.052 [2024-12-09 23:19:42.363362] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:38:15.052 [2024-12-09 23:19:42.363373] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:38:15.052 [2024-12-09 23:19:42.363384] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:38:15.052 [2024-12-09 23:19:42.363397] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:38:15.052 [2024-12-09 23:19:42.363409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.363419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:38:15.052 [2024-12-09 23:19:42.363430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.311 ms 00:38:15.052 [2024-12-09 23:19:42.363440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.363547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.052 [2024-12-09 23:19:42.363560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:38:15.052 [2024-12-09 23:19:42.363570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:38:15.052 [2024-12-09 23:19:42.363580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.052 [2024-12-09 23:19:42.363685] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:38:15.052 [2024-12-09 23:19:42.363702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:38:15.052 [2024-12-09 23:19:42.363712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:38:15.052 [2024-12-09 23:19:42.363723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:15.052 [2024-12-09 23:19:42.363733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:38:15.052 [2024-12-09 23:19:42.363742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:38:15.052 [2024-12-09 23:19:42.363751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:38:15.052 [2024-12-09 23:19:42.363760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:38:15.052 [2024-12-09 23:19:42.363772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:38:15.052 [2024-12-09 23:19:42.363780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:15.052 [2024-12-09 23:19:42.363790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:38:15.052 [2024-12-09 23:19:42.363799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:38:15.052 [2024-12-09 23:19:42.363808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:15.052 [2024-12-09 
23:19:42.363817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:38:15.052 [2024-12-09 23:19:42.363826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:38:15.052 [2024-12-09 23:19:42.363835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:15.052 [2024-12-09 23:19:42.363844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:38:15.052 [2024-12-09 23:19:42.363853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:38:15.052 [2024-12-09 23:19:42.363862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:15.052 [2024-12-09 23:19:42.363870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:38:15.052 [2024-12-09 23:19:42.363879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:38:15.052 [2024-12-09 23:19:42.363899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:15.052 [2024-12-09 23:19:42.363908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:38:15.052 [2024-12-09 23:19:42.363917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:38:15.052 [2024-12-09 23:19:42.363927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:15.052 [2024-12-09 23:19:42.363936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:38:15.052 [2024-12-09 23:19:42.363946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:38:15.052 [2024-12-09 23:19:42.363955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:15.052 [2024-12-09 23:19:42.363965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:38:15.053 [2024-12-09 23:19:42.363974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:38:15.053 [2024-12-09 23:19:42.363983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:15.053 [2024-12-09 23:19:42.363992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:38:15.053 [2024-12-09 23:19:42.364002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:38:15.053 [2024-12-09 23:19:42.364011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:15.053 [2024-12-09 23:19:42.364021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:38:15.053 [2024-12-09 23:19:42.364030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:38:15.053 [2024-12-09 23:19:42.364039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:15.053 [2024-12-09 23:19:42.364048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:38:15.053 [2024-12-09 23:19:42.364057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:38:15.053 [2024-12-09 23:19:42.364066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:15.053 [2024-12-09 23:19:42.364075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:38:15.053 [2024-12-09 23:19:42.364084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:38:15.053 [2024-12-09 23:19:42.364093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:15.053 [2024-12-09 23:19:42.364102] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:38:15.053 [2024-12-09 23:19:42.364113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:38:15.053 
[2024-12-09 23:19:42.364122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:38:15.053 [2024-12-09 23:19:42.364132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:15.053 [2024-12-09 23:19:42.364142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:38:15.053 [2024-12-09 23:19:42.364151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:38:15.053 [2024-12-09 23:19:42.364160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:38:15.053 [2024-12-09 23:19:42.364170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:38:15.053 [2024-12-09 23:19:42.364179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:38:15.053 [2024-12-09 23:19:42.364188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:38:15.053 [2024-12-09 23:19:42.364199] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:38:15.053 [2024-12-09 23:19:42.364212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:15.053 [2024-12-09 23:19:42.364224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:38:15.053 [2024-12-09 23:19:42.364240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:38:15.053 [2024-12-09 23:19:42.364250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:38:15.053 [2024-12-09 23:19:42.364261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:38:15.053 [2024-12-09 23:19:42.364274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:38:15.053 [2024-12-09 23:19:42.364285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:38:15.053 [2024-12-09 23:19:42.364295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:38:15.053 [2024-12-09 23:19:42.364306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:38:15.053 [2024-12-09 23:19:42.364316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:38:15.053 [2024-12-09 23:19:42.364326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:38:15.053 [2024-12-09 23:19:42.364336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:38:15.053 [2024-12-09 23:19:42.364347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:38:15.053 [2024-12-09 23:19:42.364356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:38:15.053 [2024-12-09 23:19:42.364367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:38:15.053 [2024-12-09 23:19:42.364377] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:38:15.053 [2024-12-09 23:19:42.364393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:15.053 [2024-12-09 23:19:42.364404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:15.053 [2024-12-09 23:19:42.364415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:38:15.053 [2024-12-09 23:19:42.364426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:38:15.053 [2024-12-09 23:19:42.364437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:38:15.053 [2024-12-09 23:19:42.364448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.053 [2024-12-09 23:19:42.364459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:38:15.053 [2024-12-09 23:19:42.364468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.817 ms 00:38:15.053 [2024-12-09 23:19:42.364815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.312 [2024-12-09 23:19:42.404738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.312 [2024-12-09 23:19:42.405048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:38:15.312 [2024-12-09 23:19:42.405181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.877 ms 00:38:15.312 [2024-12-09 23:19:42.405222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.312 [2024-12-09 23:19:42.405313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.312 [2024-12-09 23:19:42.405352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:38:15.312 [2024-12-09 23:19:42.405387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:38:15.312 [2024-12-09 23:19:42.405419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.312 [2024-12-09 23:19:42.458361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.312 [2024-12-09 23:19:42.458715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:38:15.312 [2024-12-09 23:19:42.458913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.821 ms 00:38:15.312 [2024-12-09 23:19:42.458955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.312 [2024-12-09 23:19:42.459054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.312 [2024-12-09 23:19:42.459216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:38:15.312 [2024-12-09 23:19:42.459293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:38:15.312 [2024-12-09 23:19:42.459334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.312 [2024-12-09 23:19:42.459553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.312 [2024-12-09 23:19:42.459729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
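
The layout dump above reports each region twice: in MiB per device, and in raw blocks in the superblock metadata section (hex blk_offs/blk_sz). Assuming the standard 4 KiB FTL block size, the two agree; a quick check for the l2p region (type 0x2, blk_offs:0x20 blk_sz:0xe80):

# 0x20 and 0xe80 are block counts; multiply by 4096 B and convert to MiB
awk -v off=$((16#20)) -v sz=$((16#e80)) 'BEGIN {
    printf "offset: %.2f MiB, blocks: %.2f MiB\n",
           off * 4096 / 1048576, sz * 4096 / 1048576
}'
# -> offset: 0.12 MiB, blocks: 14.50 MiB  (matches "Region l2p" above)
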
00:38:15.312 [2024-12-09 23:19:42.459749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:38:15.312 [2024-12-09 23:19:42.459760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.312 [2024-12-09 23:19:42.459817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.312 [2024-12-09 23:19:42.459829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:38:15.312 [2024-12-09 23:19:42.459840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:38:15.313 [2024-12-09 23:19:42.459859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.313 [2024-12-09 23:19:42.486377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.313 [2024-12-09 23:19:42.486669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:38:15.313 [2024-12-09 23:19:42.486706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.534 ms 00:38:15.313 [2024-12-09 23:19:42.486719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.313 [2024-12-09 23:19:42.486896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.313 [2024-12-09 23:19:42.486912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:38:15.313 [2024-12-09 23:19:42.486924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:38:15.313 [2024-12-09 23:19:42.486934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.313 [2024-12-09 23:19:42.525083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.313 [2024-12-09 23:19:42.525315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:38:15.313 [2024-12-09 23:19:42.525342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.180 ms 00:38:15.313 [2024-12-09 23:19:42.525355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.313 [2024-12-09 23:19:42.541905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.313 [2024-12-09 23:19:42.541982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:38:15.313 [2024-12-09 23:19:42.541999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.636 ms 00:38:15.313 [2024-12-09 23:19:42.542009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.313 [2024-12-09 23:19:42.635316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.313 [2024-12-09 23:19:42.635405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:38:15.313 [2024-12-09 23:19:42.635423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 93.343 ms 00:38:15.313 [2024-12-09 23:19:42.635434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.313 [2024-12-09 23:19:42.635688] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:38:15.313 [2024-12-09 23:19:42.635818] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:38:15.313 [2024-12-09 23:19:42.635942] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:38:15.313 [2024-12-09 23:19:42.636067] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:38:15.313 [2024-12-09 23:19:42.636086] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.313 [2024-12-09 23:19:42.636097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:38:15.313 [2024-12-09 23:19:42.636109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.551 ms 00:38:15.313 [2024-12-09 23:19:42.636120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.313 [2024-12-09 23:19:42.636226] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:38:15.313 [2024-12-09 23:19:42.636248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.313 [2024-12-09 23:19:42.636259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:38:15.313 [2024-12-09 23:19:42.636270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:38:15.313 [2024-12-09 23:19:42.636280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.572 [2024-12-09 23:19:42.662841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.572 [2024-12-09 23:19:42.663117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:38:15.572 [2024-12-09 23:19:42.663147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.568 ms 00:38:15.572 [2024-12-09 23:19:42.663158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.572 [2024-12-09 23:19:42.678658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.572 [2024-12-09 23:19:42.678726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:38:15.572 [2024-12-09 23:19:42.678741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:38:15.572 [2024-12-09 23:19:42.678752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:15.572 [2024-12-09 23:19:42.678918] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:38:15.572 [2024-12-09 23:19:42.679112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:15.572 [2024-12-09 23:19:42.679128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:38:15.572 [2024-12-09 23:19:42.679140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:38:15.572 [2024-12-09 23:19:42.679151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.139 [2024-12-09 23:19:43.249967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.139 [2024-12-09 23:19:43.250207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:38:16.139 [2024-12-09 23:19:43.250240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 570.305 ms 00:38:16.139 [2024-12-09 23:19:43.250252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.139 [2024-12-09 23:19:43.256057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.139 [2024-12-09 23:19:43.256121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:38:16.139 [2024-12-09 23:19:43.256138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.223 ms 00:38:16.139 [2024-12-09 23:19:43.256154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.139 [2024-12-09 23:19:43.256641] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:38:16.139 [2024-12-09 23:19:43.256722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.139 [2024-12-09 23:19:43.256735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:38:16.139 [2024-12-09 23:19:43.256748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:38:16.139 [2024-12-09 23:19:43.256758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.139 [2024-12-09 23:19:43.256795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.139 [2024-12-09 23:19:43.256808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:38:16.139 [2024-12-09 23:19:43.256825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:16.139 [2024-12-09 23:19:43.256836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.139 [2024-12-09 23:19:43.256875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 578.900 ms, result 0 00:38:16.139 [2024-12-09 23:19:43.256921] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:38:16.139 [2024-12-09 23:19:43.256997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.139 [2024-12-09 23:19:43.257008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:38:16.139 [2024-12-09 23:19:43.257019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:38:16.139 [2024-12-09 23:19:43.257028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.826224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.826306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:38:16.707 [2024-12-09 23:19:43.826346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 568.783 ms 00:38:16.707 [2024-12-09 23:19:43.826358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.832148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.832374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:38:16.707 [2024-12-09 23:19:43.832399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.203 ms 00:38:16.707 [2024-12-09 23:19:43.832409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.832940] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:38:16.707 [2024-12-09 23:19:43.832971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.832982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:38:16.707 [2024-12-09 23:19:43.832994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.498 ms 00:38:16.707 [2024-12-09 23:19:43.833004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.833036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.833048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:38:16.707 [2024-12-09 23:19:43.833059] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:16.707 [2024-12-09 23:19:43.833069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.833110] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 577.121 ms, result 0 00:38:16.707 [2024-12-09 23:19:43.833166] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:38:16.707 [2024-12-09 23:19:43.833180] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:38:16.707 [2024-12-09 23:19:43.833194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.833205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:38:16.707 [2024-12-09 23:19:43.833216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1156.179 ms 00:38:16.707 [2024-12-09 23:19:43.833226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.833265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.833278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:38:16.707 [2024-12-09 23:19:43.833288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:38:16.707 [2024-12-09 23:19:43.833298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.848872] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:38:16.707 [2024-12-09 23:19:43.849067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.849084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:38:16.707 [2024-12-09 23:19:43.849098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.775 ms 00:38:16.707 [2024-12-09 23:19:43.849109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.849773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.849807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:38:16.707 [2024-12-09 23:19:43.849819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.534 ms 00:38:16.707 [2024-12-09 23:19:43.849829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.851794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.851826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:38:16.707 [2024-12-09 23:19:43.851838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.944 ms 00:38:16.707 [2024-12-09 23:19:43.851849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.851905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.851918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:38:16.707 [2024-12-09 23:19:43.851935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:38:16.707 [2024-12-09 23:19:43.851946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.852058] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.852072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:38:16.707 [2024-12-09 23:19:43.852084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:38:16.707 [2024-12-09 23:19:43.852094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.707 [2024-12-09 23:19:43.852118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.707 [2024-12-09 23:19:43.852130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:38:16.707 [2024-12-09 23:19:43.852141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:38:16.708 [2024-12-09 23:19:43.852155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.708 [2024-12-09 23:19:43.852194] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:38:16.708 [2024-12-09 23:19:43.852208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.708 [2024-12-09 23:19:43.852218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:38:16.708 [2024-12-09 23:19:43.852229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:38:16.708 [2024-12-09 23:19:43.852239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.708 [2024-12-09 23:19:43.852289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:16.708 [2024-12-09 23:19:43.852301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:38:16.708 [2024-12-09 23:19:43.852311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:38:16.708 [2024-12-09 23:19:43.852326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:16.708 [2024-12-09 23:19:43.853520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1539.230 ms, result 0 00:38:16.708 [2024-12-09 23:19:43.865904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:16.708 [2024-12-09 23:19:43.881921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:38:16.708 [2024-12-09 23:19:43.892786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:16.708 Validate MD5 checksum, iteration 1 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:16.708 23:19:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:16.708 [2024-12-09 23:19:44.038584] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:38:16.708 [2024-12-09 23:19:44.039225] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84657 ] 00:38:16.967 [2024-12-09 23:19:44.223253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.226 [2024-12-09 23:19:44.368211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:19.129  [2024-12-09T23:19:46.723Z] Copying: 702/1024 [MB] (702 MBps) [2024-12-09T23:19:50.009Z] Copying: 1024/1024 [MB] (average 694 MBps) 00:38:22.673 00:38:22.673 23:19:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:38:22.673 23:19:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:24.053 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:24.053 Validate MD5 checksum, iteration 2 00:38:24.053 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=397b9e88e6cdbc22dacf52f3db1916bb 00:38:24.053 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 397b9e88e6cdbc22dacf52f3db1916bb != \3\9\7\b\9\e\8\8\e\6\c\d\b\c\2\2\d\a\c\f\5\2\f\3\d\b\1\9\1\6\b\b ]] 00:38:24.053 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:24.053 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:24.053 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:38:24.053 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:24.054 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:24.054 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:24.054 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:24.054 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:24.054 23:19:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:24.054 [2024-12-09 23:19:51.151932] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 00:38:24.054 [2024-12-09 23:19:51.152074] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84731 ] 00:38:24.054 [2024-12-09 23:19:51.336969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.312 [2024-12-09 23:19:51.484077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:26.212  [2024-12-09T23:19:53.827Z] Copying: 695/1024 [MB] (695 MBps) [2024-12-09T23:19:55.212Z] Copying: 1024/1024 [MB] (average 696 MBps) 00:38:27.876 00:38:27.876 23:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:38:27.876 23:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dbfc562879c84584b9d06025d0ca5249 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dbfc562879c84584b9d06025d0ca5249 != \d\b\f\c\5\6\2\8\7\9\c\8\4\5\8\4\b\9\d\0\6\0\2\5\d\0\c\a\5\2\4\9 ]] 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84617 ]] 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84617 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84617 ']' 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84617 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:29.806 23:19:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84617 00:38:29.806 killing process with pid 84617 00:38:29.806 23:19:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:29.806 23:19:57 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:29.806 23:19:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84617' 00:38:29.806 23:19:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84617 00:38:29.806 23:19:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84617 00:38:31.237 [2024-12-09 23:19:58.161722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:38:31.237 [2024-12-09 23:19:58.180990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.237 [2024-12-09 23:19:58.181061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:38:31.237 [2024-12-09 23:19:58.181077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:31.237 [2024-12-09 23:19:58.181088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.237 [2024-12-09 23:19:58.181131] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:38:31.237 [2024-12-09 23:19:58.185238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.237 [2024-12-09 23:19:58.185288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:38:31.237 [2024-12-09 23:19:58.185302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.096 ms 00:38:31.237 [2024-12-09 23:19:58.185312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.237 [2024-12-09 23:19:58.185573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.237 [2024-12-09 23:19:58.185589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:38:31.237 [2024-12-09 23:19:58.185600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.216 ms 00:38:31.237 [2024-12-09 23:19:58.185611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.237 [2024-12-09 23:19:58.186791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.237 [2024-12-09 23:19:58.186826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:38:31.237 [2024-12-09 23:19:58.186846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.163 ms 00:38:31.237 [2024-12-09 23:19:58.186857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.187815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.238 [2024-12-09 23:19:58.187846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:38:31.238 [2024-12-09 23:19:58.187858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.923 ms 00:38:31.238 [2024-12-09 23:19:58.187869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.203624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.238 [2024-12-09 23:19:58.203711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:38:31.238 [2024-12-09 23:19:58.203739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.723 ms 00:38:31.238 [2024-12-09 23:19:58.203750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.211939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.238 [2024-12-09 23:19:58.212022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:38:31.238 [2024-12-09 23:19:58.212039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.153 ms 00:38:31.238 [2024-12-09 23:19:58.212050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.212152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.238 [2024-12-09 23:19:58.212166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:38:31.238 [2024-12-09 23:19:58.212189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:38:31.238 [2024-12-09 23:19:58.212200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.227508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.238 [2024-12-09 23:19:58.227594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:38:31.238 [2024-12-09 23:19:58.227610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.310 ms 00:38:31.238 [2024-12-09 23:19:58.227621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.243152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.238 [2024-12-09 23:19:58.243224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:38:31.238 [2024-12-09 23:19:58.243239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.508 ms 00:38:31.238 [2024-12-09 23:19:58.243250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.258150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.238 [2024-12-09 23:19:58.258220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:38:31.238 [2024-12-09 23:19:58.258236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.876 ms 00:38:31.238 [2024-12-09 23:19:58.258263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.273814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.238 [2024-12-09 23:19:58.273885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:38:31.238 [2024-12-09 23:19:58.273901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.475 ms 00:38:31.238 [2024-12-09 23:19:58.273912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.273959] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:38:31.238 [2024-12-09 23:19:58.273982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:38:31.238 [2024-12-09 23:19:58.273996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:38:31.238 [2024-12-09 23:19:58.274008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:38:31.238 [2024-12-09 23:19:58.274020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274056] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:31.238 [2024-12-09 23:19:58.274186] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:38:31.238 [2024-12-09 23:19:58.274196] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d2578713-fbfc-46d1-a4ec-463d87c2c101 00:38:31.238 [2024-12-09 23:19:58.274208] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:38:31.238 [2024-12-09 23:19:58.274217] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:38:31.238 [2024-12-09 23:19:58.274227] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:38:31.238 [2024-12-09 23:19:58.274238] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:38:31.238 [2024-12-09 23:19:58.274247] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:38:31.238 [2024-12-09 23:19:58.274270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:38:31.238 [2024-12-09 23:19:58.274281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:38:31.238 [2024-12-09 23:19:58.274290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:38:31.238 [2024-12-09 23:19:58.274305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:38:31.238 [2024-12-09 23:19:58.274316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.238 [2024-12-09 23:19:58.274327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:38:31.238 [2024-12-09 23:19:58.274338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.360 ms 00:38:31.238 [2024-12-09 23:19:58.274348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.296105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.238 [2024-12-09 23:19:58.296170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:38:31.238 [2024-12-09 23:19:58.296186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.749 ms 00:38:31.238 [2024-12-09 23:19:58.296207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.296798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.238 [2024-12-09 23:19:58.296817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:38:31.238 [2024-12-09 23:19:58.296828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.544 ms 00:38:31.238 [2024-12-09 23:19:58.296839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.367793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.238 [2024-12-09 23:19:58.367872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:38:31.238 [2024-12-09 23:19:58.367895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.238 [2024-12-09 23:19:58.367906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.367965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.238 [2024-12-09 23:19:58.367977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:38:31.238 [2024-12-09 23:19:58.367987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.238 [2024-12-09 23:19:58.367998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.368137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.238 [2024-12-09 23:19:58.368152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:38:31.238 [2024-12-09 23:19:58.368163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.238 [2024-12-09 23:19:58.368174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.368200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.238 [2024-12-09 23:19:58.368212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:38:31.238 [2024-12-09 23:19:58.368223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.238 [2024-12-09 23:19:58.368233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.238 [2024-12-09 23:19:58.497571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.238 [2024-12-09 23:19:58.497650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:38:31.238 [2024-12-09 23:19:58.497667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.238 [2024-12-09 23:19:58.497705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.497 [2024-12-09 23:19:58.599488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.497 [2024-12-09 23:19:58.599582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:38:31.497 [2024-12-09 23:19:58.599598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.497 [2024-12-09 23:19:58.599610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.497 [2024-12-09 23:19:58.599734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.497 [2024-12-09 23:19:58.599747] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:38:31.497 [2024-12-09 23:19:58.599759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.497 [2024-12-09 23:19:58.599770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.497 [2024-12-09 23:19:58.599844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.497 [2024-12-09 23:19:58.599870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:38:31.497 [2024-12-09 23:19:58.599881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.497 [2024-12-09 23:19:58.599896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.497 [2024-12-09 23:19:58.600037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.497 [2024-12-09 23:19:58.600051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:38:31.497 [2024-12-09 23:19:58.600061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.497 [2024-12-09 23:19:58.600072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.497 [2024-12-09 23:19:58.600112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.497 [2024-12-09 23:19:58.600129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:38:31.497 [2024-12-09 23:19:58.600139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.497 [2024-12-09 23:19:58.600150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.497 [2024-12-09 23:19:58.600191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.497 [2024-12-09 23:19:58.600203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:38:31.497 [2024-12-09 23:19:58.600214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.497 [2024-12-09 23:19:58.600224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.497 [2024-12-09 23:19:58.600272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:38:31.497 [2024-12-09 23:19:58.600299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:38:31.497 [2024-12-09 23:19:58.600309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:38:31.497 [2024-12-09 23:19:58.600319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.497 [2024-12-09 23:19:58.600443] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 420.101 ms, result 0 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:32.873 Remove shared memory files 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84389 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:38:32.873 00:38:32.873 real 1m30.511s 00:38:32.873 user 2m4.620s 00:38:32.873 sys 0m23.102s 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:32.873 23:19:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:38:32.873 ************************************ 00:38:32.873 END TEST ftl_upgrade_shutdown 00:38:32.873 ************************************ 00:38:32.873 23:20:00 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:38:32.873 23:20:00 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:38:32.873 23:20:00 ftl -- ftl/ftl.sh@14 -- # killprocess 77179 00:38:32.873 23:20:00 ftl -- common/autotest_common.sh@954 -- # '[' -z 77179 ']' 00:38:32.873 23:20:00 ftl -- common/autotest_common.sh@958 -- # kill -0 77179 00:38:32.873 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77179) - No such process 00:38:32.873 Process with pid 77179 is not found 00:38:32.873 23:20:00 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77179 is not found' 00:38:32.873 23:20:00 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:38:32.873 23:20:00 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84861 00:38:32.873 23:20:00 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:32.873 23:20:00 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84861 00:38:32.873 23:20:00 ftl -- common/autotest_common.sh@835 -- # '[' -z 84861 ']' 00:38:32.873 23:20:00 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:32.873 23:20:00 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:32.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:32.873 23:20:00 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:32.873 23:20:00 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:32.873 23:20:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:32.873 [2024-12-09 23:20:00.127297] Starting SPDK v25.01-pre git sha1 f80471632 / DPDK 24.03.0 initialization... 
00:38:32.873 [2024-12-09 23:20:00.127467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84861 ] 00:38:33.132 [2024-12-09 23:20:00.313149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.132 [2024-12-09 23:20:00.445568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.066 23:20:01 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:34.066 23:20:01 ftl -- common/autotest_common.sh@868 -- # return 0 00:38:34.066 23:20:01 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:34.639 nvme0n1 00:38:34.639 23:20:01 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:38:34.639 23:20:01 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:34.639 23:20:01 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:34.639 23:20:01 ftl -- ftl/common.sh@28 -- # stores=568dcebd-1922-445a-ad60-d8b534097aa3 00:38:34.639 23:20:01 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:38:34.639 23:20:01 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 568dcebd-1922-445a-ad60-d8b534097aa3 00:38:34.898 23:20:02 ftl -- ftl/ftl.sh@23 -- # killprocess 84861 00:38:34.898 23:20:02 ftl -- common/autotest_common.sh@954 -- # '[' -z 84861 ']' 00:38:34.898 23:20:02 ftl -- common/autotest_common.sh@958 -- # kill -0 84861 00:38:34.898 23:20:02 ftl -- common/autotest_common.sh@959 -- # uname 00:38:34.898 23:20:02 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:34.898 23:20:02 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84861 00:38:34.898 23:20:02 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:34.898 killing process with pid 84861 00:38:34.898 23:20:02 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:34.898 23:20:02 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84861' 00:38:34.898 23:20:02 ftl -- common/autotest_common.sh@973 -- # kill 84861 00:38:34.898 23:20:02 ftl -- common/autotest_common.sh@978 -- # wait 84861 00:38:37.428 23:20:04 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:38:37.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:37.950 Waiting for block devices as requested 00:38:37.950 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:37.950 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:38.241 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:38:38.241 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:38:43.551 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:38:43.551 23:20:10 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:38:43.551 Remove shared memory files 00:38:43.551 23:20:10 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:43.551 23:20:10 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:38:43.551 23:20:10 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:38:43.551 23:20:10 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:38:43.551 23:20:10 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:43.551 23:20:10 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:38:43.551 00:38:43.551 real 
11m27.399s 00:38:43.551 user 13m56.021s 00:38:43.551 sys 1m34.803s 00:38:43.551 23:20:10 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:43.551 23:20:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:43.551 ************************************ 00:38:43.551 END TEST ftl 00:38:43.551 ************************************ 00:38:43.552 23:20:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:43.552 23:20:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:43.552 23:20:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:43.552 23:20:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:43.552 23:20:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:43.552 23:20:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:43.552 23:20:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:43.552 23:20:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:43.552 23:20:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:43.552 23:20:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:43.552 23:20:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:43.552 23:20:10 -- common/autotest_common.sh@10 -- # set +x 00:38:43.552 23:20:10 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:43.552 23:20:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:43.552 23:20:10 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:43.552 23:20:10 -- common/autotest_common.sh@10 -- # set +x 00:38:46.084 INFO: APP EXITING 00:38:46.084 INFO: killing all VMs 00:38:46.084 INFO: killing vhost app 00:38:46.084 INFO: EXIT DONE 00:38:46.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:46.651 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:38:46.651 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:38:46.651 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:38:46.910 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:38:47.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:47.732 Cleaning 00:38:47.732 Removing: /var/run/dpdk/spdk0/config 00:38:47.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:47.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:47.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:47.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:47.732 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:47.732 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:47.732 Removing: /var/run/dpdk/spdk0 00:38:47.732 Removing: /var/run/dpdk/spdk_pid57752 00:38:47.733 Removing: /var/run/dpdk/spdk_pid57998 00:38:47.733 Removing: /var/run/dpdk/spdk_pid58238 00:38:47.733 Removing: /var/run/dpdk/spdk_pid58342 00:38:47.733 Removing: /var/run/dpdk/spdk_pid58398 00:38:47.733 Removing: /var/run/dpdk/spdk_pid58537 00:38:47.733 Removing: /var/run/dpdk/spdk_pid58555 00:38:47.733 Removing: /var/run/dpdk/spdk_pid58771 00:38:47.733 Removing: /var/run/dpdk/spdk_pid58888 00:38:47.733 Removing: /var/run/dpdk/spdk_pid59000 00:38:47.733 Removing: /var/run/dpdk/spdk_pid59128 00:38:47.733 Removing: /var/run/dpdk/spdk_pid59236 00:38:47.733 Removing: /var/run/dpdk/spdk_pid59281 00:38:47.733 Removing: /var/run/dpdk/spdk_pid59319 00:38:47.733 Removing: /var/run/dpdk/spdk_pid59395 00:38:47.733 Removing: /var/run/dpdk/spdk_pid59507 00:38:47.733 Removing: /var/run/dpdk/spdk_pid59970 00:38:47.733 Removing: /var/run/dpdk/spdk_pid60051 00:38:47.733 
Removing: /var/run/dpdk/spdk_pid60136 00:38:47.733 Removing: /var/run/dpdk/spdk_pid60152 00:38:47.991 Removing: /var/run/dpdk/spdk_pid60322 00:38:47.991 Removing: /var/run/dpdk/spdk_pid60338 00:38:47.991 Removing: /var/run/dpdk/spdk_pid60497 00:38:47.991 Removing: /var/run/dpdk/spdk_pid60519 00:38:47.991 Removing: /var/run/dpdk/spdk_pid60594 00:38:47.991 Removing: /var/run/dpdk/spdk_pid60612 00:38:47.991 Removing: /var/run/dpdk/spdk_pid60687 00:38:47.991 Removing: /var/run/dpdk/spdk_pid60705 00:38:47.991 Removing: /var/run/dpdk/spdk_pid60911 00:38:47.991 Removing: /var/run/dpdk/spdk_pid60948 00:38:47.991 Removing: /var/run/dpdk/spdk_pid61031 00:38:47.991 Removing: /var/run/dpdk/spdk_pid61230 00:38:47.991 Removing: /var/run/dpdk/spdk_pid61326 00:38:47.991 Removing: /var/run/dpdk/spdk_pid61373 00:38:47.991 Removing: /var/run/dpdk/spdk_pid61834 00:38:47.991 Removing: /var/run/dpdk/spdk_pid61938 00:38:47.991 Removing: /var/run/dpdk/spdk_pid62058 00:38:47.991 Removing: /var/run/dpdk/spdk_pid62111 00:38:47.991 Removing: /var/run/dpdk/spdk_pid62142 00:38:47.991 Removing: /var/run/dpdk/spdk_pid62226 00:38:47.991 Removing: /var/run/dpdk/spdk_pid62875 00:38:47.991 Removing: /var/run/dpdk/spdk_pid62924 00:38:47.991 Removing: /var/run/dpdk/spdk_pid63424 00:38:47.991 Removing: /var/run/dpdk/spdk_pid63522 00:38:47.991 Removing: /var/run/dpdk/spdk_pid63650 00:38:47.991 Removing: /var/run/dpdk/spdk_pid63708 00:38:47.991 Removing: /var/run/dpdk/spdk_pid63734 00:38:47.991 Removing: /var/run/dpdk/spdk_pid63765 00:38:47.991 Removing: /var/run/dpdk/spdk_pid65660 00:38:47.991 Removing: /var/run/dpdk/spdk_pid65814 00:38:47.991 Removing: /var/run/dpdk/spdk_pid65823 00:38:47.991 Removing: /var/run/dpdk/spdk_pid65838 00:38:47.991 Removing: /var/run/dpdk/spdk_pid65884 00:38:47.991 Removing: /var/run/dpdk/spdk_pid65888 00:38:47.991 Removing: /var/run/dpdk/spdk_pid65900 00:38:47.991 Removing: /var/run/dpdk/spdk_pid65950 00:38:47.991 Removing: /var/run/dpdk/spdk_pid65954 00:38:47.991 Removing: /var/run/dpdk/spdk_pid65966 00:38:47.991 Removing: /var/run/dpdk/spdk_pid66016 00:38:47.991 Removing: /var/run/dpdk/spdk_pid66020 00:38:47.991 Removing: /var/run/dpdk/spdk_pid66032 00:38:47.991 Removing: /var/run/dpdk/spdk_pid67458 00:38:47.991 Removing: /var/run/dpdk/spdk_pid67577 00:38:47.991 Removing: /var/run/dpdk/spdk_pid69007 00:38:47.991 Removing: /var/run/dpdk/spdk_pid70759 00:38:47.991 Removing: /var/run/dpdk/spdk_pid70844 00:38:47.991 Removing: /var/run/dpdk/spdk_pid70925 00:38:47.991 Removing: /var/run/dpdk/spdk_pid71040 00:38:47.991 Removing: /var/run/dpdk/spdk_pid71140 00:38:47.991 Removing: /var/run/dpdk/spdk_pid71239 00:38:47.991 Removing: /var/run/dpdk/spdk_pid71325 00:38:47.991 Removing: /var/run/dpdk/spdk_pid71406 00:38:47.991 Removing: /var/run/dpdk/spdk_pid71516 00:38:47.991 Removing: /var/run/dpdk/spdk_pid71613 00:38:47.991 Removing: /var/run/dpdk/spdk_pid71721 00:38:48.250 Removing: /var/run/dpdk/spdk_pid71802 00:38:48.250 Removing: /var/run/dpdk/spdk_pid71888 00:38:48.250 Removing: /var/run/dpdk/spdk_pid71997 00:38:48.250 Removing: /var/run/dpdk/spdk_pid72094 00:38:48.250 Removing: /var/run/dpdk/spdk_pid72198 00:38:48.250 Removing: /var/run/dpdk/spdk_pid72283 00:38:48.250 Removing: /var/run/dpdk/spdk_pid72365 00:38:48.250 Removing: /var/run/dpdk/spdk_pid72476 00:38:48.250 Removing: /var/run/dpdk/spdk_pid72575 00:38:48.250 Removing: /var/run/dpdk/spdk_pid72681 00:38:48.250 Removing: /var/run/dpdk/spdk_pid72762 00:38:48.250 Removing: /var/run/dpdk/spdk_pid72842 00:38:48.250 Removing: 
/var/run/dpdk/spdk_pid72920 00:38:48.250 Removing: /var/run/dpdk/spdk_pid73001 00:38:48.250 Removing: /var/run/dpdk/spdk_pid73110 00:38:48.250 Removing: /var/run/dpdk/spdk_pid73203 00:38:48.250 Removing: /var/run/dpdk/spdk_pid73309 00:38:48.250 Removing: /var/run/dpdk/spdk_pid73395 00:38:48.250 Removing: /var/run/dpdk/spdk_pid73472 00:38:48.250 Removing: /var/run/dpdk/spdk_pid73549 00:38:48.250 Removing: /var/run/dpdk/spdk_pid73629 00:38:48.250 Removing: /var/run/dpdk/spdk_pid73749 00:38:48.250 Removing: /var/run/dpdk/spdk_pid73845 00:38:48.250 Removing: /var/run/dpdk/spdk_pid73995 00:38:48.250 Removing: /var/run/dpdk/spdk_pid74300 00:38:48.250 Removing: /var/run/dpdk/spdk_pid74338 00:38:48.250 Removing: /var/run/dpdk/spdk_pid74804 00:38:48.250 Removing: /var/run/dpdk/spdk_pid74997 00:38:48.250 Removing: /var/run/dpdk/spdk_pid75103 00:38:48.250 Removing: /var/run/dpdk/spdk_pid75217 00:38:48.250 Removing: /var/run/dpdk/spdk_pid75276 00:38:48.250 Removing: /var/run/dpdk/spdk_pid75302 00:38:48.250 Removing: /var/run/dpdk/spdk_pid75600 00:38:48.250 Removing: /var/run/dpdk/spdk_pid75678 00:38:48.250 Removing: /var/run/dpdk/spdk_pid75785 00:38:48.250 Removing: /var/run/dpdk/spdk_pid76220 00:38:48.250 Removing: /var/run/dpdk/spdk_pid76372 00:38:48.250 Removing: /var/run/dpdk/spdk_pid77179 00:38:48.250 Removing: /var/run/dpdk/spdk_pid77328 00:38:48.250 Removing: /var/run/dpdk/spdk_pid77533 00:38:48.250 Removing: /var/run/dpdk/spdk_pid77641 00:38:48.250 Removing: /var/run/dpdk/spdk_pid77980 00:38:48.250 Removing: /var/run/dpdk/spdk_pid78244 00:38:48.250 Removing: /var/run/dpdk/spdk_pid78609 00:38:48.250 Removing: /var/run/dpdk/spdk_pid78810 00:38:48.250 Removing: /var/run/dpdk/spdk_pid78955 00:38:48.250 Removing: /var/run/dpdk/spdk_pid79019 00:38:48.250 Removing: /var/run/dpdk/spdk_pid79163 00:38:48.250 Removing: /var/run/dpdk/spdk_pid79199 00:38:48.250 Removing: /var/run/dpdk/spdk_pid79269 00:38:48.250 Removing: /var/run/dpdk/spdk_pid79479 00:38:48.250 Removing: /var/run/dpdk/spdk_pid79731 00:38:48.250 Removing: /var/run/dpdk/spdk_pid80134 00:38:48.250 Removing: /var/run/dpdk/spdk_pid80559 00:38:48.250 Removing: /var/run/dpdk/spdk_pid81000 00:38:48.509 Removing: /var/run/dpdk/spdk_pid81519 00:38:48.509 Removing: /var/run/dpdk/spdk_pid81668 00:38:48.509 Removing: /var/run/dpdk/spdk_pid81759 00:38:48.509 Removing: /var/run/dpdk/spdk_pid82376 00:38:48.509 Removing: /var/run/dpdk/spdk_pid82451 00:38:48.509 Removing: /var/run/dpdk/spdk_pid82899 00:38:48.509 Removing: /var/run/dpdk/spdk_pid83293 00:38:48.509 Removing: /var/run/dpdk/spdk_pid83800 00:38:48.509 Removing: /var/run/dpdk/spdk_pid83923 00:38:48.509 Removing: /var/run/dpdk/spdk_pid83987 00:38:48.509 Removing: /var/run/dpdk/spdk_pid84051 00:38:48.509 Removing: /var/run/dpdk/spdk_pid84107 00:38:48.509 Removing: /var/run/dpdk/spdk_pid84171 00:38:48.509 Removing: /var/run/dpdk/spdk_pid84389 00:38:48.509 Removing: /var/run/dpdk/spdk_pid84470 00:38:48.509 Removing: /var/run/dpdk/spdk_pid84538 00:38:48.509 Removing: /var/run/dpdk/spdk_pid84617 00:38:48.509 Removing: /var/run/dpdk/spdk_pid84657 00:38:48.509 Removing: /var/run/dpdk/spdk_pid84731 00:38:48.509 Removing: /var/run/dpdk/spdk_pid84861 00:38:48.509 Clean 00:38:48.509 23:20:15 -- common/autotest_common.sh@1453 -- # return 0 00:38:48.509 23:20:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:38:48.509 23:20:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:48.509 23:20:15 -- common/autotest_common.sh@10 -- # set +x 00:38:48.509 23:20:15 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:38:48.509 23:20:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:48.509 23:20:15 -- common/autotest_common.sh@10 -- # set +x 00:38:48.767 23:20:15 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:48.767 23:20:15 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:38:48.767 23:20:15 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:38:48.767 23:20:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:38:48.768 23:20:15 -- spdk/autotest.sh@398 -- # hostname 00:38:48.768 23:20:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:38:48.768 geninfo: WARNING: invalid characters removed from testname! 00:39:15.360 23:20:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:17.898 23:20:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:20.436 23:20:47 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:22.340 23:20:49 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:24.875 23:20:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:26.791 23:20:53 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:29.341 23:20:56 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:29.341 23:20:56 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:29.341 23:20:56 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:39:29.341 23:20:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:29.341 23:20:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:29.341 23:20:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:39:29.341 + [[ -n 5251 ]] 00:39:29.341 + sudo kill 5251 00:39:29.351 [Pipeline] } 00:39:29.368 [Pipeline] // timeout 00:39:29.375 [Pipeline] } 00:39:29.390 [Pipeline] // stage 00:39:29.396 [Pipeline] } 00:39:29.411 [Pipeline] // catchError 00:39:29.422 [Pipeline] stage 00:39:29.424 [Pipeline] { (Stop VM) 00:39:29.437 [Pipeline] sh 00:39:29.765 + vagrant halt 00:39:33.056 ==> default: Halting domain... 00:39:39.759 [Pipeline] sh 00:39:40.040 + vagrant destroy -f 00:39:43.337 ==> default: Removing domain... 00:39:43.605 [Pipeline] sh 00:39:43.882 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:39:43.889 [Pipeline] } 00:39:43.903 [Pipeline] // stage 00:39:43.908 [Pipeline] } 00:39:43.921 [Pipeline] // dir 00:39:43.925 [Pipeline] } 00:39:43.939 [Pipeline] // wrap 00:39:43.944 [Pipeline] } 00:39:43.957 [Pipeline] // catchError 00:39:43.966 [Pipeline] stage 00:39:43.968 [Pipeline] { (Epilogue) 00:39:43.980 [Pipeline] sh 00:39:44.262 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:50.853 [Pipeline] catchError 00:39:50.855 [Pipeline] { 00:39:50.868 [Pipeline] sh 00:39:51.162 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:51.162 Artifacts sizes are good 00:39:51.185 [Pipeline] } 00:39:51.216 [Pipeline] // catchError 00:39:51.222 [Pipeline] archiveArtifacts 00:39:51.227 Archiving artifacts 00:39:51.315 [Pipeline] cleanWs 00:39:51.324 [WS-CLEANUP] Deleting project workspace... 00:39:51.324 [WS-CLEANUP] Deferred wipeout is used... 00:39:51.332 [WS-CLEANUP] done 00:39:51.333 [Pipeline] } 00:39:51.342 [Pipeline] // stage 00:39:51.345 [Pipeline] } 00:39:51.356 [Pipeline] // node 00:39:51.360 [Pipeline] End of Pipeline 00:39:51.393 Finished: SUCCESS